Commit Graph

215 Commits

Author SHA1 Message Date
Steve Loughran
74e5018d87 HADOOP-16635. S3A "directories only" scan still does a HEAD.
Contributed by Steve Loughran.

Change-Id: I5e41d7f721364c392e1f4344db83dfa8c5aa06ce
2019-10-14 17:05:52 +01:00
Steve Loughran
dee9e97075 Revert "HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos."
This reverts commit 7a4b3d42c4.

The patch broke TestRouterWebHDFSContractSeek as it turns out that
WebHDFSInputStream.available() is always 0.
2019-10-14 16:56:50 +01:00
Gabor Bota
4a700c20d5
HADOOP-16520. Race condition in DDB table init and waiting threads. (#1576). Contributed by Gabor Bota.
Fixes HADOOP-16349. DynamoDBMetadataStore.getVersionMarkerItem() to log at info/warn on retry

Change-Id: Ia83e92b9039ccb780090c99c41b4f71ef7539d35
2019-10-11 12:08:47 +02:00
lqjacklee
7a4b3d42c4
HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos.
Contributed by lqjacklee.

Change-Id: I32bb00a683102e7ff8ff8ce0b8d9c3195ca7381c
2019-10-10 21:58:42 +01:00
Steve Loughran
effe6087a5
HADOOP-16650. ITestS3AClosedFS failing.
Contributed by Steve Loughran.

Change-Id: Ia9bb84bd6455e210a54cfe9eb944feeda8b58da9
2019-10-10 17:32:25 +01:00
Steve Loughran
b8086bf54d
HADOOP-16626. S3A ITestRestrictedReadAccess fails without S3Guard.
Contributed by Steve Loughran.

Change-Id: Ife730b80057ddd43e919438cb5b2abbda990e636
2019-10-05 12:52:42 +01:00
Steve Loughran
6574f27fa3
HADOOP-16570. S3A committers encounter scale issues.
Contributed by Steve Loughran.

This addresses two scale issues which have surfaced in large-scale benchmarks
of the S3A Committers.

* Thread pools are not cleaned up.
  This now happens, with tests.

* OOM on job commit for jobs with many thousands of tasks,
  each generating tens of (very large) files.

Instead of loading all pending commits into memory as a single list, the list
of files to load is the sole list which is passed around; .pendingset files are
loaded and processed in isolation -and reloaded if necessary for any
abort/rollback operation.

The parallel commit/abort/revert operations now work at the .pendingset level,
rather than that of individual pending commit files. The existing parallelized
Tasks API is still used to commit those files, but with a null thread pool, so
as to serialize the operations.
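
A minimal sketch of that approach, assuming hypothetical class and loader names (this is not the committer's actual code): walk the list of .pendingset files and load each one in isolation, so the full set of pending commits is never held in memory.

  import java.io.IOException;
  import java.util.List;

  // Hypothetical sketch only; not the S3A committer API.
  public class PendingsetWalkSketch {

    // Assumed abstraction for loading and committing one .pendingset file.
    interface PendingSetStore {
      List<String> loadCommitIds(String pendingsetPath) throws IOException;
      void commit(String commitId) throws IOException;
    }

    static void commitAll(List<String> pendingsetFiles, PendingSetStore store)
        throws IOException {
      for (String file : pendingsetFiles) {       // only the file list is kept in memory
        for (String id : store.loadCommitIds(file)) {
          store.commit(id);                       // serialized within one .pendingset
        }
      }
    }
  }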

Change-Id: I5c8240cd31800eaa83d112358770ca0eb2bca797
2019-10-04 18:54:22 +01:00
Steve Loughran
f44abc3e11
HADOOP-16207 Improved S3A MR tests.
Contributed by Steve Loughran.

Replaces the committer-specific terasort and MR test jobs with parameterization
of the (now single) tests and use of file:// over hdfs:// as the cluster FS.

The parameterization ensures that only one of the specific committer tests
runs at a time -overloads of the test machines are less likely, and so the
suites can be pulled back into the parallel phase.

There's also more detailed validation of the stage outputs of the terasorting;
if one test fails the rest are all skipped. This and the fact that job
output is stored under target/yarn-${timestamp} means failures should
be more debuggable.
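
As a hedged illustration of the parameterization pattern (JUnit 4 style; the class name, committer names and test body below are placeholders, not the real suite):

  import java.util.Arrays;
  import java.util.Collection;

  import org.junit.Test;
  import org.junit.runner.RunWith;
  import org.junit.runners.Parameterized;

  // Hypothetical sketch: one test class parameterized by committer name, so only
  // one committer variant runs per test instance.
  @RunWith(Parameterized.class)
  public class ITestCommitterSortSketch {

    @Parameterized.Parameters(name = "committer-{0}")
    public static Collection<Object[]> committers() {
      return Arrays.asList(new Object[][] {
          {"directory"}, {"partitioned"}, {"magic"}
      });
    }

    private final String committerName;

    public ITestCommitterSortSketch(String committerName) {
      this.committerName = committerName;
    }

    @Test
    public void testJobRunsWithCommitter() {
      // placeholder for submitting the MR/terasort job with committerName
      System.out.println("would run job with committer " + committerName);
    }
  }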

Change-Id: Iefa370ba73c6419496e6e69dd6673d00f37ff095
2019-10-04 14:12:31 +01:00
Siddharth Seth
559ee277f5
HADOOP-16599. Allow a SignerInitializer to be specified along with a Custom Signer 2019-10-02 16:03:48 -07:00
Steve Loughran
1921e94292
HADOOP-16458. LocatedFileStatusFetcher.getFileStatuses failing intermittently with S3
Contributed by Steve Loughran.

Includes
-S3A glob scans don't bother trying to resolve symlinks
-stack traces don't get lost in getFileStatuses() when exceptions are wrapped
-debug level logging of what is up in Globber
-Contains HADOOP-13373. Add S3A implementation of FSMainOperationsBaseTest.
-ITestRestrictedReadAccess tests incomplete read access to files.

This adds a builder API for constructing globbers which other stores can use
so that they too can skip symlink resolution when not needed.
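
The shape of such a builder, sketched with made-up names (the real Globber builder added by this patch has its own class and method names):

  // Hypothetical builder sketch only.
  public class GlobberBuilderSketch {

    public static final class GlobScan {
      private final boolean resolveSymlinks;
      private GlobScan(boolean resolveSymlinks) {
        this.resolveSymlinks = resolveSymlinks;
      }
      public boolean resolvesSymlinks() {
        return resolveSymlinks;
      }
    }

    public static final class Builder {
      private boolean resolveSymlinks = true;   // default: existing behaviour
      public Builder withResolveSymlinks(boolean resolve) {
        this.resolveSymlinks = resolve;
        return this;
      }
      public GlobScan build() {
        return new GlobScan(resolveSymlinks);
      }
    }

    public static void main(String[] args) {
      // An object store with no symlinks can skip resolution entirely.
      GlobScan scan = new Builder().withResolveSymlinks(false).build();
      System.out.println("resolve symlinks? " + scan.resolvesSymlinks());
    }
  }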

Change-Id: I23bcdb2783d6bd77cf168fdc165b1b4b334d91c7
2019-10-01 18:11:05 +01:00
Steve Loughran
e346e3638c HADOOP-15691 Add PathCapabilities to FileSystem and FileContext.
Contributed by Steve Loughran.

This complements the StreamCapabilities interface by allowing applications to probe whether a specific path on a specific instance of a FileSystem client
offers a specific capability.

This is intended to allow applications to determine

* Whether a method is implemented before calling it and dealing with UnsupportedOperationException.
* Whether a specific feature is believed to be available in the remote store.

As well as a common set of capabilities defined in CommonPathCapabilities,
file systems are free to add their own capabilities, prefixed with
 "fs." + scheme + "."

The plan is to identify and document more capabilities -and for file systems which add new features, to always declare the availability of those features.

Note

* The remote store is not expected to be checked for the feature;
  it is more a check of the client API and the client's configuration/knowledge
  of the state of the remote system.
* Permissions are not checked.
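
A short usage sketch of the probe described above, assuming the Hadoop 3.3+ API this commit introduces (the bucket path is illustrative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.CommonPathCapabilities;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class PathCapabilityProbe {
    public static void main(String[] args) throws Exception {
      Path path = new Path("s3a://example-bucket/data");   // illustrative path
      FileSystem fs = path.getFileSystem(new Configuration());

      // Probe the client, not the remote store: this reflects the client API and
      // its configuration/knowledge of the remote system.
      boolean canAppend = fs.hasPathCapability(path, CommonPathCapabilities.FS_APPEND);
      System.out.println("append supported here? " + canAppend);
    }
  }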

Change-Id: I80bfebe94f4a8bdad8f3ac055495735b824968f5
2019-09-25 12:16:41 +01:00
Siddharth Seth
2b5fc95851
HADOOP-16591 Fix S3A ITest*MRjob failures.
Contributed by Siddharth Seth.

Change-Id: I7f08201c9f7c0551514049389b5b398a84855191
2019-09-23 14:58:03 +01:00
Siddharth Seth
e02b1023c2
HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB (#1332) 2019-09-21 11:50:45 +05:30
Steve Loughran
5db32b8ced HADOOP-16547. make sure that s3guard prune sets up the FS (#1402). Contributed by Steve Loughran.
Change-Id: Iaf71561cef6c797a3c66fed110faf08da6cac361
2019-09-18 19:22:15 +02:00
Sahil Takiar
55ce454ce4
HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8.
Contributed by Sahil Takiar.

This moves the SSLSocketFactoryEx class from hadoop-azure into hadoop-common
as the DelegatingSSLSocketFactory and binds the S3A connector to it so that
it can avoid using those HTTPS algorithms which are underperformant on Java 8.
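
A hedged configuration sketch; the fs.s3a.ssl.channel.mode property and its default_jsse value are assumptions recalled from the S3A documentation of this feature, so verify them before use:

  import org.apache.hadoop.conf.Configuration;

  public class S3ASslChannelModeExample {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Assumed option: pick a JSSE channel mode that skips the slow GCM
      // ciphers on Java 8.
      conf.set("fs.s3a.ssl.channel.mode", "default_jsse");
      System.out.println(conf.get("fs.s3a.ssl.channel.mode"));
    }
  }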

Change-Id: Ie9e6ac24deac1aa05e136e08899620efa7d22abd
2019-09-17 11:32:03 +01:00
Gabor Bota
4e273a31f6
HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) (#1208). Contributed by Gabor Bota.
Change-Id: I6bbb331b6c0a41c61043e482b95504fda8a50596
2019-09-12 13:12:46 +02:00
Steve Loughran
9221704f85
HADOOP-16490. Avoid/handle cached 404s during S3A file creation.
Contributed by Steve Loughran.

This patch avoids issuing any HEAD path request when creating a file with overwrite=true,
so 404s will not end up in the S3 load balancers unless someone calls getFileStatus/exists/isFile
in their own code.

The Hadoop FsShell CommandWithDestination class is modified to not register uncreated files
for deleteOnExit(), because that calls exists() and so can place the 404 in the cache, even
after S3A is patched to not do it itself.

Because S3Guard knows when a file should be present, it adds a special FileNotFound retry policy
independently configurable from other retry policies; it is also exponential, but with
different parameters. This is because every HEAD request will refresh any 404 cached in
the S3 Load Balancers. It's not enough to retry: we have to have a suitable gap between
attempts to (hopefully) ensure any cached entry will be gone.

The options and values are:

fs.s3a.s3guard.consistency.retry.interval: 2s
fs.s3a.s3guard.consistency.retry.limit: 7

The S3A copy() method used during rename() raises a RemoteFileChangedException which is not caught,
so is not downgraded to false. Thus: when a rename is unrecoverable, this fact is propagated.

Copy operations without S3Guard lack the confidence that the file exists, so don't retry the same way:
they fail fast with a different error message. However, because create(path, overwrite=false) no
longer does HEAD path, we can at least be confident that S3A itself is not creating those cached
404 markers.
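
For illustration, the two options above can be set programmatically like any other Hadoop configuration value (names and values as given in this commit message):

  import org.apache.hadoop.conf.Configuration;

  public class S3GuardConsistencyRetryConfig {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Wider-than-usual exponential retry, to let cached 404s age out of the
      // S3 load balancers between attempts.
      conf.set("fs.s3a.s3guard.consistency.retry.interval", "2s");
      conf.setInt("fs.s3a.s3guard.consistency.retry.limit", 7);
      System.out.println(conf.get("fs.s3a.s3guard.consistency.retry.interval"));
    }
  }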

Change-Id: Ia7807faad8b9a8546836cb19f816cccf17cca26d
2019-09-11 16:46:25 +01:00
Steve Loughran
511df1e837 HADOOP-16430. S3AFilesystem.delete to incrementally update s3guard with deletions
Contributed by Steve Loughran.

This overlaps the scanning for directory entries with batched calls to S3 DELETE and updates of the S3Guard tables.
It also uses S3Guard to list the files to delete, so it finds newly created files even when S3 listings are not consistent.

For paths which the client considers S3Guard to be authoritative for, we also do a recursive LIST of the store and delete those files; this is to find unindexed files and to guarantee that the delete(path, true) call really does delete everything underneath.
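
A hypothetical sketch of that incremental, batched shape (the types are illustrative and the 1000-key page size is the S3 bulk-delete limit; this is not the S3A code):

  import java.util.ArrayList;
  import java.util.Iterator;
  import java.util.List;
  import java.util.function.Consumer;

  // Illustrative only: page an incremental listing into bounded DELETE batches
  // so the metadata store can be updated as the scan progresses.
  public class IncrementalDeleteSketch {
    private static final int PAGE_SIZE = 1000;   // S3 bulk delete limit

    static void deleteAll(Iterator<String> keys,
                          Consumer<List<String>> bulkDelete,
                          Consumer<List<String>> metastoreRemove) {
      List<String> page = new ArrayList<>(PAGE_SIZE);
      while (keys.hasNext()) {
        page.add(keys.next());
        if (page.size() == PAGE_SIZE) {
          bulkDelete.accept(page);        // issue the S3 DELETE for this page
          metastoreRemove.accept(page);   // and update S3Guard incrementally
          page = new ArrayList<>(PAGE_SIZE);
        }
      }
      if (!page.isEmpty()) {
        bulkDelete.accept(page);
        metastoreRemove.accept(page);
      }
    }
  }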

Change-Id: Ice2f6e940c506e0b3a78fa534a99721b1698708e
2019-09-05 14:25:15 +01:00
Ewan Higgs
23e532d739 Revert "HADOOP-16193. Add extra S3A MPU test to see what happens if a file is created during the MPU. Contributed by Steve Loughran"
This reverts commit 69ddb36876.
2019-08-26 12:37:26 +02:00
Ewan Higgs
69ddb36876 HADOOP-16193. Add extra S3A MPU test to see what happens if a file is created during the MPU. Contributed by Steve Loughran 2019-08-22 13:56:47 +02:00
Steve Loughran
189dc10884 HADOOP-16481. ITestS3GuardDDBRootOperations.test_300_MetastorePrune needs to set region. (#1209). Contributed by Steve Loughran. 2019-08-09 17:33:08 +02:00
Steve Loughran
e25a5c2eab HADOOP-16499. S3A retry policy to be exponential (#1246). Contributed by Steve Loughran. 2019-08-09 15:52:37 +02:00
Gabor Bota
7b219778e0
HADOOP-16433. S3Guard: Filter expired entries and tombstones when listing with MetadataStore.listChildren().
Contributed by Gabor Bota.

This pulls the tracking of the lastUpdated timestamp of metadata entries up from the DDB metastore into all s3guard stores, and then uses this to filter out expired tombstones from listings.
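
A minimal sketch of that filtering step, assuming a made-up entry type with lastUpdated and tombstone fields rather than the real MetadataStore types:

  import java.util.List;
  import java.util.stream.Collectors;

  // Hypothetical entry shape; the real code works on S3Guard's own metadata types.
  public class TombstoneFilterSketch {
    static final class Entry {
      final String path;
      final long lastUpdated;     // millis; 0 means "unknown"
      final boolean tombstone;
      Entry(String path, long lastUpdated, boolean tombstone) {
        this.path = path;
        this.lastUpdated = lastUpdated;
        this.tombstone = tombstone;
      }
    }

    // Drop tombstones whose lastUpdated timestamp has passed the TTL.
    static List<Entry> filterExpiredTombstones(List<Entry> listing,
        long nowMillis, long ttlMillis) {
      return listing.stream()
          .filter(e -> !(e.tombstone && e.lastUpdated > 0
              && nowMillis - e.lastUpdated > ttlMillis))
          .collect(Collectors.toList());
    }
  }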

Change-Id: I80f121236b49c75a024116f65a3ef29d3580b462
2019-07-24 18:11:43 +01:00
Steve Loughran
4317d33232
HADOOP-16380. S3Guard to determine empty directory status for all non-root directories.
Contributed by Steve Loughran and Gabor Bota.

This
* Asks S3Guard to determine the empty directory status.
* Has S3A's root directory rm("/") call always return false (as abfs does)
* Documents that object stores MAY do this
* Overloads ContractTestUtils.assertDeleted to let assertions declare that the source directory does not need to exist. This stops inconsistencies in directory listings failing a root test.

It avoids a recent regression (HADOOP-16279) where if there was a tombstone above the first element found in a directory listing, the directory would be considered empty, when in fact there were child entries. That could downgrade an rm(path, recursive) to a no-op, while also confusing rename(src, dest), as dest could be mistaken for an empty directory and so permit the copy above it, rather than rejecting it with "destination path exists and is not empty".

Change-Id: I136a3d1a5a48a67e6155d790a40ff558d0d2c108
2019-07-23 14:52:03 +01:00
lqjaclee
cd967c75a7
HADOOP-15847. S3Guard testConcurrentTableCreations to set R/W capacity == 0
Contributed by lqjaclee

Change-Id: I4a4d5b29f2677c188799479e4db38f07fa0591d1
2019-07-19 14:46:55 +01:00
Gabor Bota
c58e11bf52
HADOOP-16383. Pass ITtlTimeProvider instance in initialize method in MetadataStore interface. Contributed by Gabor Bota. (#1009) 2019-07-17 16:24:39 +02:00
Sean Mackrory
5672efa5c7
HADOOP-15729. [s3a] Allow core threads to time out. (#1075) 2019-07-16 18:14:23 -06:00
Steve Loughran
b15ef7dc3d
HADOOP-16384: S3A: Avoid inconsistencies between DDB and S3.
Contributed by Steve Loughran

Contains

- HADOOP-16397. Hadoop S3Guard Prune command to support a -tombstone option.
- HADOOP-16406. ITestDynamoDBMetadataStore.testProvisionTable times out intermittently

This patch doesn't fix the underlying problem but it

* changes some tests to clean up better
* does a lot more logging of operations against DDB, if enabled
* adds an entry point to dump the state of the metastore and s3 tables (precursor to fsck)
* adds a purge entry point to help clean up after a test run has got a store into a mess
* s3guard prune command adds -tombstone option to only clear tombstones

The outcome is that tests should pass consistently and if problems occur we have better diagnostics.

Change-Id: I3eca3f5529d7f6fec398c0ff0472919f08f054eb
2019-07-12 13:02:25 +01:00
Steve Loughran
6a3433bffd
HADOOP-16357. TeraSort Job failing on S3 DirectoryStagingCommitter: destination path exists.
Contributed by Steve Loughran.

This patch

* changes the default for the staging committer to append, as we get for the classic FileOutputFormat committer (see the config sketch after this list)
* adds a check for the dest path being a file not a dir
* adds tests for this
* Changes AbstractCommitTerasortIT to not use the simple parser, so it fails if the file is present.
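
The config sketch referenced in the first bullet; the fs.s3a.committer.staging.conflict-mode property name is an assumption recalled from the S3A committer docs, not stated in this commit:

  import org.apache.hadoop.conf.Configuration;

  public class StagingCommitterConflictMode {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Assumed property/value: with "append" the job adds its output to an
      // existing destination directory rather than failing on "path exists".
      conf.set("fs.s3a.committer.staging.conflict-mode", "append");
      System.out.println(conf.get("fs.s3a.committer.staging.conflict-mode"));
    }
  }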

Change-Id: Id53742958ed1cf321ff96c9063505d64f3254f53
2019-07-11 18:15:34 +01:00
Steve Loughran
c7b5f858a0
HADOOP-16393. S3Guard init command uses global settings, not those of target bucket.
Contributed by Steve Loughran.

Change-Id: I226a91ab8d7758340f8d221aa80a7abf9a0d3e8f
2019-07-10 20:57:02 +01:00
Sean Mackrory
de6b7bc67a HADOOP-16409. Allow authoritative mode on non-qualified paths. Contributed by Sean Mackrory 2019-07-08 19:27:07 +02:00
Sean Mackrory
34747c373f
HADOOP-16396. Allow authoritative mode on a subdirectory. (#1043) 2019-07-03 12:04:47 -06:00
Steve Loughran
e02eb24e0a
HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename.
Contributed by Steve Loughran.

Change-Id: I825b0bc36be960475d2d259b1cdab45ae1bb78eb
2019-06-20 09:56:40 +01:00
Sahil Takiar
28291a9e8a
HADOOP-16379: S3AInputStream.unbuffer should merge input stream stats into fs-wide stats
Contributed by Sahil Takiar

Change-Id: I2bcfaaea00d12c633757069402dcd0b91a5f5c05
2019-06-20 09:42:27 +01:00
Gabor Bota
f9cc9e1621
HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries (and tombstones).
Contributed by Gabor Bota.

Change-Id: I73a2d2861901dedfe7a0e783b310fbb95e7c1af9
2019-06-16 17:05:01 +01:00
Steve Loughran
4e38dafde4
HADOOP-15563. S3Guard to support creating on-demand DDB tables.
Contributed by Steve Loughran

Change-Id: I2262b5b9f52e42ded8ed6f50fd39756f96e77087
2019-06-07 18:26:10 +01:00
Steve Loughran
309501c6fa
Revert "HADOOP-16050: s3a SSL connections should use OpenSSL"
This reverts commit b067f8acaa.

Change-Id: I584b050a56c0e6f70b11fa3f7db00d5ac46e7dd8
2019-06-05 13:54:55 +01:00
Steve Loughran
7724d8031b Revert "HADOOP-16321: ITestS3ASSL+TestOpenSSLSocketFactory failing with java.lang.UnsatisfiedLinkErrors"
This reverts commit 5906268f0d.
2019-06-05 13:54:42 +01:00
Steve Loughran
0c73dba3a6
HADOOP-16332. Remove S3A dependency on http core.
Contributed by Steve Loughran.

Change-Id: I53209c993a405fefdb5e1b692d5a56d027d3b845
2019-05-28 22:50:37 +01:00
Sahil Takiar
5906268f0d HADOOP-16321: ITestS3ASSL+TestOpenSSLSocketFactory failing with java.lang.UnsatisfiedLinkErrors 2019-05-21 11:30:45 -06:00
Ben Roling
a36274d699
HADOOP-16085. S3Guard: use object version or etags to protect against inconsistent read after replace/overwrite.
Contributed by Ben Roling.

S3Guard will now track the etag of uploaded files and, if an S3
bucket is versioned, the object version.

You can then control how to react to a mismatch between the data
in the DynamoDB table and that in the store: warn, fail, or, when
using versions, return the original value.

This adds two new columns to the table: etag and version.
This is transparent to older S3A clients -but when such clients
add/update data to the S3Guard table, they will not add these values.
As a result, the etag/version checks will not work with files uploaded by older clients.

For a consistent experience, upgrade all clients to use the latest hadoop version.
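
A hedged sketch of how a client might pick its reaction; the fs.s3a.change.detection.* property names and values are assumptions drawn from the S3A change-detection documentation, not from this commit:

  import org.apache.hadoop.conf.Configuration;

  public class ChangeDetectionConfigSketch {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Assumed properties: compare on etag (or "versionid" on a versioned
      // bucket) and choose how a mismatch is handled, e.g. "warn" or "client".
      conf.set("fs.s3a.change.detection.source", "etag");
      conf.set("fs.s3a.change.detection.mode", "warn");
      System.out.println(conf.get("fs.s3a.change.detection.mode"));
    }
  }
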
2019-05-19 22:29:54 +01:00
Sahil Takiar
b067f8acaa HADOOP-16050: s3a SSL connections should use OpenSSL
(cherry picked from commit aebf229c175dfa19fff3b31e9e67596f6c6124fa)
2019-05-16 08:57:54 -06:00
Ben Roling
0af4011580
HADOOP-16221. S3Guard: add option to fail operation on metadata write failure. 2019-04-30 11:53:26 +01:00
Ben Roling
e1c5ddf2aa
HADOOP-16252. Add prefix to dynamo tables in tests.
Contributed by Ben Roling.
2019-04-24 14:55:58 +01:00
Sahil Takiar
2382f63fc0
HADOOP-14747. S3AInputStream to implement CanUnbuffer.
Author:    Sahil Takiar <stakiar@cloudera.com>
2019-04-12 18:12:02 -07:00
Steve Loughran
cf4efcab3b
HADOOP-16118. S3Guard to support on-demand DDB tables.
This is the first step for on-demand operations: things recognize when they are using on-demand tables,
as do the tests.

Contributed by Steve Loughran.
2019-04-11 17:12:12 -07:00
Steve Loughran
366186d999
HADOOP-16233. S3AFileStatus to declare that isEncrypted() is always true (#685)
This is needed to fix up some confusion about caching of job.addCache() handling of S3A paths; when a path and all its parent dirs appear world-readable, the files are downloaded by the NM without using the DTs of the user submitting the job. This means that when you submit jobs to an EC2 cluster with lower IAM permissions than the user, cached resources don't get downloaded and the job doesn't start.

Production code changes:
* S3AFileStatus adds "true" to the superclass's encrypted flag during construction.

Tests
* Base AbstractContractOpenTest can control whether zero byte files created in tests are encrypted. Not done via an XML attribute, just a subclass point. Thoughts?
* Verify that the filecache considers paths to not have the permissions which trigger reduced-privilege downloads
* And extend ITestDelegatedMRJob to test a completely different bucket (open street map), to verify that cached resources do get their tokens picked up

Docs:
* Advise FS developers to say all files are encrypted. It's otherwise harmless and it'll stop other people seeing impossible-to-debug error messages on app launch.

Contributed by Steve Loughran.

Change-Id: Ifaae4c9d735ccc5eafeebd2584b65daf2d4e5da3
2019-04-03 21:23:40 +01:00
Gabor Bota
b5db238383
HADOOP-15999. S3Guard: Better support for out-of-band operations.
Author:    Gabor Bota
2019-03-28 15:59:25 +00:00
Gabor Bota
cfb0186903
HADOOP-16186. S3Guard: NPE in DynamoDBMetadataStore.lambda$listChildren.
Author:    Gabor Bota
2019-03-28 15:49:56 +00:00
Steve Loughran
9f1c017f44
HADOOP-16058. S3A tests to include Terasort.
Contributed by Steve Loughran.

This includes
 - HADOOP-15890. Some S3A committer tests don't match ITest* pattern; don't run in maven
 - MAPREDUCE-7090. BigMapOutput example doesn't work with paths off cluster fs
 - MAPREDUCE-7091. Terasort on S3A to switch to new committers
 - MAPREDUCE-7092. MR examples to work better against cloud stores
2019-03-21 11:15:37 +00:00