Commit Graph

675 Commits

Author SHA1 Message Date
Mehakmeet Singh
4c8cd61961
HADOOP-17461. Collect thread-level IOStatistics. (#4352)
This adds a thread-level collector of IOStatistics, IOStatisticsContext,
which can be:
* Retrieved for a thread and cached for access from other
  threads.
* reset() to record new statistics.
* Queried for live statistics through the
  IOStatisticsSource.getIOStatistics() method.
* Queried for a statistics aggregator for use in instrumented
  classes.
* Asked to create a serializable copy in snapshot()

The goal is to make it possible for applications with multiple
threads performing different work items simultaneously
to be able to collect statistics on the individual threads,
and so generate aggregate reports on the total work performed
for a specific job, query or similar unit of work.

Some changes in IOStatistics-gathering classes are needed for
this feature:
* Caching the active context's aggregator in the object's
  constructor
* Updating it in close()

Slightly more work is needed in multithreaded code,
such as the S3A committers, which collect statistics across
all threads used in task and job commit operations.

Currently the IOStatisticsContext-aware classes are:
* The S3A input stream, output stream and list iterators.
* RawLocalFileSystem's input and output streams.
* The S3A committers.
* The TaskPool class in hadoop-common, which propagates
  the active context into scheduled worker threads.

Collection of statistics in the IOStatisticsContext
is disabled process-wide by default until the feature 
is considered stable.

To enable the collection, set the option
fs.thread.level.iostatistics.enabled
to "true" in core-site.xml;
Contributed by Mehakmeet Singh and Steve Loughran
2022-07-26 20:41:22 +01:00
ashutoshpant
bac2219e3c
HADOOP-18330. S3AFileSystem removes Path when calling createS3Client (#4572)
Adds a new parameter object in s3ClientCreationParameters that holds 
the full s3a path URI

Contributed by Ashutosh Pant
2022-07-21 10:16:39 +01:00
Mukund Thakur
4d1f6f9b99 HADOOP-18106: Handle memory fragmentation in S3A Vectored IO. (#4445)
part of HADOOP-18103.
Handling memory fragmentation in the S3A vectored IO implementation by
allocating buffers of the size requested for each user range, directly
filling them from the remote S3 stream and skipping undesired
data in between ranges.
This patch also adds aborting of active vectored reads when the stream is
closed or unbuffer() is called.

Contributed By: Mukund Thakur
2022-06-22 17:29:32 +01:00
Mukund Thakur
1408dd89a7 HADOOP-18107 Adding scale test for vectored reads for large file (#4273)
part of HADOOP-18103.

Contributed By: Mukund Thakur
2022-06-22 17:29:32 +01:00
Mukund Thakur
5db0f34e29 HADOOP-18104: S3A: Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads (#3964)
Part of HADOOP-18103.
Introducing fs.s3a.vectored.read.min.seek.size and fs.s3a.vectored.read.max.merged.size
to configure min seek and max read during a vectored IO operation in S3A connector.
These properties actually define how the ranges will be merged. To completely
disable merging set fs.s3a.max.readsize.vectored.read to 0.
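
A sketch of tuning these options on a client Configuration; the byte values are purely illustrative, not the shipped defaults:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    Configuration conf = new Configuration();
    // merge ranges separated by less than 4 KiB; never merge beyond 1 MiB (illustrative)
    conf.setLong("fs.s3a.vectored.read.min.seek.size", 4096);
    conf.setLong("fs.s3a.vectored.read.max.merged.size", 1048576);
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);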

Contributed By: Mukund Thakur
2022-06-22 17:29:32 +01:00
Mukund Thakur
2daf0a814f HADOOP-11867. Add a high-performance vectored read API. (#3904)
part of HADOOP-18103.
Add support for a multi-range vectored read API in PositionedReadable.
The default implementation iterates through the ranges to read each synchronously,
but the intent is that FSDataInputStream subclasses can provide more
efficient readers, especially in object store implementations.

Also added implementation in S3A where smaller ranges are merged and
sliced byte buffers are returned to the readers. All the merged ranges are
fetched from S3 asynchronously.
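
A sketch of a caller using the API, assuming an already-created FileSystem `fs`; the path, offsets and lengths are hypothetical and error handling is omitted:

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileRange;
    import org.apache.hadoop.fs.Path;

    try (FSDataInputStream in = fs.open(new Path("s3a://example-bucket/data.orc"))) {
      List<FileRange> ranges = Arrays.asList(
          FileRange.createFileRange(0, 8192),                  // offset, length
          FileRange.createFileRange(16 * 1024 * 1024, 8192));
      in.readVectored(ranges, ByteBuffer::allocate);           // reads are issued asynchronously
      for (FileRange range : ranges) {
        ByteBuffer buffer = range.getData().join();            // wait for each range's data
        // ... process buffer ...
      }
    }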

Contributed By: Owen O'Malley and Mukund Thakur
2022-06-22 17:29:32 +01:00
Steve Loughran
e199da3fae
HADOOP-17833. Improve Magic Committer performance (#3289)
Speed up the magic committer with key changes being

* Writes under __magic always retain directory markers

* File creation under __magic skips all overwrite checks,
  including the LIST call intended to stop files being
  created over dirs.
* mkdirs under __magic probes the path for existence
  but does not look any further.

Extra parallelism in task and job commit directory scanning.
Use of createFile and openFile with parameters which allow
HEAD checks to be skipped.

The committer can write the summary _SUCCESS file to the path
`fs.s3a.committer.summary.report.directory`, which can be in a
different file system/bucket if desired, using the job id as
the filename. 

Also: HADOOP-15460. S3A FS to add `fs.s3a.create.performance`

Application code can set the createFile() option
fs.s3a.create.performance to true to disable the same
safety checks when writing under magic directories.
Use with care.

The createFile option prefix `fs.s3a.create.header.`
can be used to add custom headers to S3 objects when
created.
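
A sketch of application code opting in to these createFile() features; the path, header name and value are hypothetical, and, as noted above, the performance flag should be used with care:

    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.Path;

    FSDataOutputStream out = fs.createFile(new Path("s3a://example-bucket/output/part-0000"))
        .opt("fs.s3a.create.performance", true)                // skip the overwrite safety checks
        .opt("fs.s3a.create.header.content-encoding", "gzip")  // hypothetical custom header
        .build();
    try {
      out.write(data);   // data: a byte[] prepared by the application
    } finally {
      out.close();
    }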


Contributed by Steve Loughran.
2022-06-17 19:11:35 +01:00
monthonk
5ac55b405d
HADOOP-12020. Add s3a storage class option fs.s3a.create.storage.class (#3877)
Adds a new option fs.s3a.create.storage.class which can
be used to set the storage class for files created in AWS S3.
Consult the documentation for details and instructions on how to
disable the relevant tests when testing against third-party
stores.

Contributed by Monthon Klongklaew
2022-06-08 19:05:17 +01:00
Ashutosh Gupta
a46ef5f2eb
HADOOP-18234. Fix s3a access point xml examples (#4309)
Contributed by Ashutosh Gupta
2022-05-16 17:47:14 +01:00
Daniel Carl Jones
4230162a76
HADOOP-18168. Fix S3A ITestMarkerTool use of purged public bucket. (#4140)
This moves off use of the purged s3a://landsat-pds bucket, so fixing tests
which had started failing.
* Adds a new class, PublicDatasetTestUtils to manage the use of public datasets.
* The new test bucket s3a://usgs-landsat/ is requester pays, so depends upon
  HADOOP-14661.

Consult the updated test documentation when running against other S3 stores.

Contributed by Daniel Carl Jones

Change-Id: Ie8585e4d9b67667f8cb80b2970225d79a4f8d257
2022-05-03 14:28:08 +01:00
Steve Loughran
6ec39d45c9 Revert "HADOOP-18168. . (#4140)"
This reverts commit 6ab7b72cd6.
2022-05-03 14:27:52 +01:00
Daniel Carl Jones
6ab7b72cd6
HADOOP-18168. . (#4140)
This moves off use of the purged s3a://landsat-pds bucket, so fixing tests
which had started failing.
* Adds a new class, PublicDatasetTestUtils to manage the use of public datasets.
* The new test bucket s3a://usgs-landsat/ is requester pays, so depends upon
  HADOOP-14661.

Consult the updated  test documentation when running against other S3 stores.

Contributed by Daniel Carl Jones
2022-05-03 14:26:52 +01:00
Steve Loughran
e0cd0a82e0
HADOOP-16202. Enhanced openFile(): hadoop-aws changes. (#2584/3)
S3A input stream support for the few fs.option.openfile settings.
As well as supporting the read policy option and values,
if the file length is declared in fs.option.openfile.length
then no HEAD request will be issued when opening a file.
This can cut a few tens of milliseconds off the operation.

The patch adds a new openfile parameter/FS configuration option
fs.s3a.input.async.drain.threshold (default: 16000).
It declares the number of bytes remaining in the http input stream
above which any operation to read and discard the rest of the stream,
"draining", is executed asynchronously.
This asynchronous draining offers some performance benefit on seek-heavy
file IO.
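
A sketch of passing these options through openFile(), assuming an S3A FileSystem `fs`; the path is hypothetical, and the length is supplied from a FileStatus so that the HEAD request can be skipped:

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.util.functional.FutureIO;

    Path path = new Path("s3a://example-bucket/logs/events.avro");
    FileStatus status = fs.getFileStatus(path);   // or a status retained from a listing
    FSDataInputStream in = FutureIO.awaitFuture(
        fs.openFile(path)
            .opt("fs.option.openfile.read.policy", "random")
            .opt("fs.option.openfile.length", Long.toString(status.getLen()))
            .build());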

Contributed by Steve Loughran.

Change-Id: I9b0626bbe635e9fd97ac0f463f5e7167e0111e39
2022-04-24 17:33:05 +01:00
Daniel Carl Jones
a6ebc42671
HADOOP-18201. Remove endpoint config overrides for ITestS3ARequesterPays (#4169)
Contributed by Daniel Carl Jones.
2022-04-14 16:21:34 +01:00
Daniel Carl Jones
9edfe30a60
HADOOP-14661. Add S3 requester pays bucket support to S3A (#3962)
Adds the option fs.s3a.requester.pays.enabled, which, if set to true, allows
the client to access S3 buckets where the requester is billed for the IO.

Contributed by Daniel Carl Jones
2022-03-23 20:00:50 +00:00
Steve Loughran
708a0ce21b
HADOOP-13704. Optimized S3A getContentSummary()
Optimize the scan for s3 by performing a deep tree listing,
inferring directory counts from the paths returned.

Contributed by Ahmar Suhail.

Change-Id: I26ffa8c6f65fd11c68a88d6e2243b0eac6ffd024
2022-03-22 13:21:12 +00:00
Mukund Thakur
672e380c4f
HADOOP-18112: Implement paging during multi object delete. (#4045)
Multi object delete requests with more than 1000 keys are not supported
by S3 and fail with a MalformedXML error, so paging of requests is
implemented to reduce the number of keys in a single request. The page
size can be configured using "fs.s3a.bulk.delete.page.size".

 Contributed By: Mukund Thakur
2022-03-11 13:05:45 +05:30
Viraj Jasani
66b72406bd
HADOOP-18131. Upgrade maven enforcer plugin and relevant dependencies (#4000)
Reviewed-by: Akira Ajisaka <aajisaka@apache.org>
Reviewed-by: Wei-Chiu Chuang <weichiu@apache.org>
Signed-off-by: Takanobu Asanuma <tasanuma@apache.org>
2022-03-08 17:27:04 +09:00
Mehakmeet Singh
6995374b54
HADOOP-18150. Fix ITestAuditManagerDisabled test in S3A. (#4044)
Contributed by Mehakmeet Singh
2022-03-03 18:44:28 +00:00
monthonk
1f157f802d
HADOOP-17386. Change default fs.s3a.buffer.dir to be under Yarn container path on yarn applications (#3908)
Co-authored-by: Monthon Klongklaew <monthonk@amazon.com>
Signed-off-by: Akira Ajisaka <aajisaka@apache.org>
2022-02-22 13:50:27 +09:00
Steve Loughran
efdec92cab
HADOOP-18091. S3A auditing leaks memory through ThreadLocal references (#3930)
Adds a new map type WeakReferenceMap, which stores weak
references to values, and a WeakReferenceThreadMap subclass
to more closely resemble a thread local type, as it is a
map of threadId to value.

Construct it with a factory method and optional callback
for notification on loss and regeneration.

 WeakReferenceThreadMap<WrappingAuditSpan> activeSpan =
      new WeakReferenceThreadMap<>(
          (k) -> getUnbondedSpan(),
          this::noteSpanReferenceLost);

This is used in ActiveAuditManagerS3A for span tracking.

Relates to
* HADOOP-17511. Add an Audit plugin point for S3A
* HADOOP-18094. Disable S3A auditing by default.

Contributed by Steve Loughran.
2022-02-10 12:31:41 +00:00
Joey Krabacher
a08e69d33e
HADOOP-18114. Documentation correction in assumed_roles.md (#3949)
Fixes typo in hadoop-aws/assumed_roles.md

Contributed by Joey Krabacher
2022-02-09 10:35:11 +00:00
Petre Bogdan Stolojan
5e7ce26e66
HADOOP-18085. S3 SDK Upgrade causes AccessPoint ARN endpoint mistranslation (#3902)
Part of HADOOP-17198. Support S3 Access Points.

HADOOP-18068. "upgrade AWS SDK to 1.12.132" broke the access point endpoint
translation.

Correct endpoints should start with "s3-accesspoint.", after SDK upgrade they start with
"s3.accesspoint-" which messes up tests + region detection by the SDK.

Contributed by Bogdan Stolojan
2022-02-04 15:37:08 +00:00
Steve Loughran
b795f6f9a8
HADOOP-18094. Disable S3A auditing by default.
See HADOOP-18091. S3A auditing leaks memory through ThreadLocal references

* Adds a new option fs.s3a.audit.enabled to control whether or not auditing
is enabled. This is false by default.

* When false, the S3A auditing manager is NoopAuditManagerS3A,
which was formerly only used for unit tests and
during filesystem initialization.

* When true, ActiveAuditManagerS3A is used for managing auditing,
allowing auditing events to be reported.

* updates documentation and tests.

This patch does not fix the underlying leak. When auditing is enabled,
long-lived threads will retain references to the audit managers
of S3A filesystem instances which have already been closed.

Contributed by Steve Loughran.
2022-01-24 13:37:33 +00:00
Steve Loughran
d8ab84275e
HADOOP-18068. upgrade AWS SDK to 1.12.132 (#3864)
With this update, the versions of key shaded dependencies are

  jackson    2.12.3
  httpclient 4.5.13

Contributed by Steve Loughran
2022-01-18 10:31:28 +00:00
Steve Loughran
14ba19af06
HADOOP-17409. Remove s3guard from S3A module (#3534)
Completely removes S3Guard support from the S3A codebase.

If the connector is configured to use any metastore other than
the null and local stores (i.e. DynamoDB is selected) the s3a client
will raise an exception and refuse to initialize.

This is to ensure that there is no mix of S3Guard enabled and disabled
deployments with the same configuration but different hadoop releases
-it must be turned off completely.

The "hadoop s3guard" command has been retained -but the supported
subcommands have been reduced to those which are not purely S3Guard
related: "bucket-info" and "uploads".

This is major change in terms of the number of files
changed; before cherry picking subsequent s3a patches into
older releases, this patch will probably need backporting
first.

Goodbye S3Guard, your work is done. Time to die.

Contributed by Steve Loughran.
2022-01-17 18:08:57 +00:00
monthonk
b27732c69b
HADOOP-14334. S3 SSEC tests to downgrade when running against a mandatory encryption object store (#3870)
Contributed by Monthon Klongklaew
2022-01-09 18:01:47 +00:00
Ashutosh Gupta
ebdbe7eb82
HADOOP-18057. Fix typo: validateEncrytionSecrets -> validateEncryptionSecrets (#3826) 2021-12-27 16:51:17 +08:00
GuoPhilipse
c65c87f211
HADOOP-18026. Fix default value of Magic committer (#3723)
Contributed by guophilipse
2021-11-29 15:50:30 +00:00
Steve Loughran
98fe0d0fc3
HADOOP-17979. Add Interface EtagSource to allow FileStatus subclasses to provide etags (#3633)
Contributed by Steve Loughran
2021-11-24 17:33:12 +00:00
Mehakmeet Singh
a35f7dec25
HADOOP-18016. Make certain methods LimitedPrivate in S3AUtils.java (#3685)
Contributed By: Mehakmeet Singh
2021-11-24 13:32:59 +05:30
Viraj Jasani
c7ec1897c4
HADOOP-18018. unguava: remove Preconditions from hadoop-tools modules (#3688) 2021-11-23 13:34:10 +09:00
Steve Loughran
6c6d1b64d4
HADOOP-17928. Syncable: S3A to warn and downgrade (#3585)
This switches the default behavior of S3A output streams
to warning that Syncable.hsync() or hflush() have been
called; it's not considered an error unless the defaults
are overridden.

This avoids breaking applications which call the APIs,
at the risk of people trying to use S3 as a safe store
of streamed data (HBase WALs, audit logs etc).

Contributed by Steve Loughran.
2021-11-02 13:26:16 +00:00
Tamas Domok
a4a874f532
HADOOP-17974. Import statements in hadoop-aws trigger clover failures.
Contributed by Tamas Domok
2021-10-21 18:31:28 +01:00
Viraj Jasani
516f36c6f1
HADOOP-17967. Keep restrict-imports-enforcer-rule for Guava VisibleForTesting in hadoop-main pom (#3555) 2021-10-21 16:54:25 +09:00
Mehakmeet Singh
cb8c98fbb0
HADOOP-17953. S3A: Tests to lookup global or per-bucket configuration for encryption algorithm (#3525)
Followup to S3-CSE work of HADOOP-13887

Contributed by Mehakmeet Singh
2021-10-19 10:58:27 +01:00
Viraj Jasani
79e5a7f3e3
HADOOP-17962. Replace Guava VisibleForTesting by Hadoop's own annotation in hadoop-tools modules (#3540) 2021-10-14 17:43:32 +09:00
Viraj Jasani
1151edf12e
HADOOP-17956. Replace all default Charset usage with UTF-8 (#3529)
Signed-off-by: Akira Ajisaka <aajisaka@apache.org>
2021-10-14 13:07:24 +09:00
Petre Bogdan Stolojan
33608c3bd4
HADOOP-17951. Improve S3A checking of S3 Access Point existence (#3516)
Follow-on to HADOOP-17198. Support S3 Access Points

Contributed by Bogdan Stolojan
2021-10-04 20:58:22 +01:00
Steve Loughran
d609f44aa0
HADOOP-17922. move to fs.s3a.encryption.algorithm - JCEKS integration (#3466)
The ordering of the resolution of new and deprecated s3a encryption options & secrets is the same when JCEKS and other hadoop credentials stores are used to store them as
when they are in XML files: per-bucket settings always take priority over global values,
even when the bucket-level options use the old option names.

Contributed by Mehakmeet Singh and Steve Loughran
2021-09-30 10:38:53 +01:00
Steve Loughran
2fda61fac6
HADOOP-17851. S3A to support user-specified content encoding (#3498)
The option fs.s3a.object.content.encoding declares the content encoding to be set on files when they are written; this is served up in the "Content-Encoding" HTTP header when reading objects back in.

This is useful for people loading the data into other tools in the AWS ecosystem which don't use file extensions to infer compression type (e.g. serving compressed files from S3 or importing into RDS)
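
A sketch of enabling this for a filesystem instance; the bucket name and encoding are illustrative:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    Configuration conf = new Configuration();
    conf.set("fs.s3a.object.content.encoding", "gzip");
    // objects written through this instance are stored with "Content-Encoding: gzip"
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);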

Contributed by: Holden Karau
2021-09-29 13:42:07 +01:00
Petre Bogdan Stolojan
b7c2864613
HADOOP-17198. Support S3 Access Points (#3260)
Add support for S3 Access Points. This provides extra security as it
ensures applications are not working with buckets belonging to third parties.

To bind a bucket to an access point, set the access point (ap) ARN,
which must be done for each specific bucket, using the pattern

fs.s3a.bucket.$BUCKET.accesspoint.arn = ARN

* The global/bucket option `fs.s3a.accesspoint.required` can be set to
mandate that buckets must declare their access point.
* This is not compatible with S3Guard.

Consult the documentation for further details.
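
A sketch of binding a bucket to an access point; the bucket name, account ID and ARN are hypothetical:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    Configuration conf = new Configuration();
    conf.set("fs.s3a.bucket.example-bucket.accesspoint.arn",
        "arn:aws:s3:eu-west-1:123456789012:accesspoint/example-ap");
    // optionally refuse to work with any bucket lacking an access point binding
    conf.setBoolean("fs.s3a.accesspoint.required", true);
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);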

Contributed by Bogdan Stolojan
2021-09-29 10:54:17 +01:00
Mehakmeet Singh
c54bf19978
HADOOP-17871. S3A CSE: minor tuning (#3412)
This migrates the fs.s3a server-side encryption configuration options
to a name which covers client-side encryption too.

fs.s3a.server-side-encryption-algorithm becomes fs.s3a.encryption.algorithm
fs.s3a.server-side-encryption.key becomes fs.s3a.encryption.key

The existing keys remain valid, simply deprecated and remapped
to the new values. If you want server-side encryption options
to be picked up regardless of hadoop versions, use
the old keys.

(the old key also works for CSE, though as no version of Hadoop
with CSE support has shipped without this remapping, it's less
relevant)


Contributed by: Mehakmeet Singh
2021-09-15 22:29:22 +01:00
Steve Loughran
6e3aeb1544
HADOOP-17894. CredentialProviderFactory.getProviders() recursion loading JCEKS file from S3A (#3393)
* CredentialProviderFactory to detect and report on recursion.
* S3AFS to remove incompatible providers.
* Integration Test for this.

Contributed by Steve Loughran.
2021-09-07 15:29:37 +01:00
Dongjoon Hyun
265a48e245
HADOOP-17869. fs.s3a.connection.maximum should be bigger than fs.s3a.threads.max (#3337).
The value of `fs.s3a.connection.maximum` has been increased to 96

Contributed by Dongjoon Hyun
2021-08-30 18:30:43 +01:00
Mehakmeet Singh
8d6a686953
HADOOP-17823. S3A S3Guard tests to skip if S3-CSE are enabled (#3263)
Follow on to
* HADOOP-13887. Encrypt S3A data client-side with AWS SDK (S3-CSE)
* HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled

If the S3A bucket is set up to use S3-CSE encryption, all tests which turn
on S3Guard are skipped, so they don't raise any exceptions about
incompatible configurations.

Contributed by: Mehakmeet Singh
2021-08-05 11:46:17 +01:00
Steve Loughran
4627e9c7ef
HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature (#3249)
Fixes the regression caused by HADOOP-17511 by moving where the
option fs.s3a.acl.default is read, doing it before the RequestFactory
is created.

Adds

* A unit test in TestRequestFactory to verify the ACLs are set
  on all file write operations.
* A new ITestS3ACannedACLs test which verifies that ACLs really
  do get all the way through.
* S3A Assumed Role delegation tokens to include the IAM permission
  s3:PutObjectAcl in the generated role.

Contributed by Steve Loughran
2021-08-02 15:26:56 +01:00
Steve Loughran
ee466d4b40
HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out. (#3240)
This patch cuts down the size of directory trees used for
distcp contract tests against object stores, so making
them much faster against distant/slow stores.

On abfs, the test only runs with -Dscale (as was the case for s3a already),
and has the larger scale test timeout.

After every test case, the FileSystem IOStatistics are logged,
to provide information about what IO is taking place and
what its performance is.

There are some test cases which upload files of 1+ MiB; you can
increase the size of the upload in the option
"scale.test.distcp.file.size.kb" 
Set it to zero and the large file tests are skipped.

Contributed by Steve Loughran.
2021-08-02 11:36:43 +01:00
Bobby Wang
266b1bd1bb
HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store (#3222)
This improves error handling after multiple failures reading data
-when the read fails and attempts to reconnect() also fail.

Contributed by Bobby Wang.
2021-07-30 20:04:11 +01:00
Petre Bogdan Stolojan
a218038960
HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem (#3101)
This work
* Defines the behavior of FileSystem.copyFromLocal in filesystem.md
* Implements a high-performance copyFromLocalOperation
  for S3
* Adds a contract test for the operation: AbstractContractCopyFromLocalTest
* Implements the contract tests for Local and S3A FileSystems

Contributed by: Bogdan Stolojan
2021-07-30 19:42:08 +01:00
Mehakmeet Singh
b19dae8db3
HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled (#3239)
Contributed by Mehakmeet Singh
2021-07-28 15:34:43 +01:00
Mehakmeet Singh
f813554769
HADOOP-13887. Support S3 client side encryption (S3-CSE) using AWS-SDK (#2706)
This (big!) patch adds support for client side encryption in AWS S3,
with keys managed by AWS-KMS.

Read the documentation in encryption.md very, very carefully before
use and consider it unstable.

S3-CSE is enabled in the existing configuration option
"fs.s3a.server-side-encryption-algorithm":

fs.s3a.server-side-encryption-algorithm=CSE-KMS
fs.s3a.server-side-encryption.key=<KMS_KEY_ID>

You cannot enable CSE and SSE in the same client, although
you can still enable a default SSE option in the S3 console. 
  
* Filesystem list/get status operations subtract 16 bytes from the length
  of all files >= 16 bytes long to compensate for the padding which CSE
  adds.
* The SDK always warns about the specific algorithm chosen being
  deprecated. It is critical to use this algorithm for ranged
  GET requests to work (i.e. random IO). Ignore.
* Unencrypted files CANNOT BE READ.
  The entire bucket SHOULD be encrypted with S3-CSE.
* Uploading files may be a bit slower as blocks are now
  written sequentially.
* The Multipart Upload API is disabled when S3-CSE is active.

Contributed by Mehakmeet Singh
2021-07-27 11:08:51 +01:00
Petre Bogdan Stolojan
63dfd84947
HADOOP-17458. S3A to treat "SdkClientException: Data read has a different length than the expected" as EOFException (#3040)
Some network exceptions can raise SdkClientException with message
`Data read has a different length than the expected`.

These should be recoverable.

Contributed by Bogdan Stolojan
2021-07-23 14:44:29 +01:00
Mehakmeet Singh
997d749f8a
HADOOP-17801. No error message reported when bucket doesn't exist in S3AFS (#3202)
Contributed by: Mehakmeet Singh.
2021-07-16 15:27:00 +01:00
Mehakmeet Singh
f6f105c7de
HADOOP-17803. Remove WARN logging from LoggingAuditor when executing a request outside an audit span (#3207)
Followup to HADOOP-17511. "Add audit/telemetry logging to S3A connector"

Contributed by Mehakmeet Singh
2021-07-16 11:47:05 +01:00
Mehakmeet Singh
ea259f236c
HADOOP-17774. S3A bytesRead FS statistic showing twice the correct value (#3144)
Contributed by: Mehakmeet Singh
2021-07-02 14:03:16 +01:00
Zamil Majdy
ed5d10ee48
HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt (#3109)
Contributed by Zamil Majdy.
2021-06-25 20:01:48 +01:00
Steve Loughran
5b7f68ac76
HADOOP-17771. S3AFS creation fails "Unable to find a region via the region provider chain." (#3133)
This addresses the regression in Hadoop 3.3.1 where if no S3 endpoint
is set in fs.s3a.endpoint, S3A filesystem creation may fail on
non-EC2 deployments, depending on the local host environment setup.

* If fs.s3a.endpoint is empty/null, and fs.s3a.endpoint.region
  is null, the region is set to "us-east-1".
* If fs.s3a.endpoint.region is explicitly set to "" then the client
  falls back to the SDK region resolution chain; this works on EC2
* Details in troubleshooting.md, including a workaround for Hadoop-3.3.1+
* Also contains some minor restructuring of troubleshooting.md

Contributed by Steve Loughran.
2021-06-24 16:37:27 +01:00
Petre Bogdan Stolojan
de9ca9f155
HADOOP-17547 Magic committer to downgrade abort in cleanup if list uploads fails with access denied (#3051)
Contributed by Bogdan Stolojan
2021-06-12 17:45:12 +01:00
Viraj Jasani
4ef27a596f
HADOOP-17753. Keep restrict-imports-enforcer-rule for Guava Lists in top level hadoop-main pom (#3087) 2021-06-11 12:15:52 +09:00
Viraj Jasani
f4b24c68e7
HADOOP-17743. Replace Guava Lists usage by Hadoop's own Lists in hadoop-common, hadoop-tools and cloud-storage projects (#3072) 2021-06-07 13:24:09 +09:00
Viraj Jasani
986d0a4f1d
HADOOP-17732. Keep restrict-imports-enforcer-rule for Guava Sets in hadoop-main pom (#3049)
Signed-off-by: Takanobu Asanuma <tasanuma@apache.org>
2021-05-26 17:14:31 +09:00
Steve Loughran
832a3c6a89
HADOOP-17511. Add audit/telemetry logging to S3A connector (#2807)
The S3A connector supports
"an auditor", a plugin which is invoked
at the start of every filesystem API call,
and whose issued "audit span" provides a context
for all REST operations against the S3 object store.

The standard auditor sets the HTTP Referrer header
on the requests with information about the API call,
such as process ID, operation name, path,
and even job ID.

If the S3 bucket is configured to log requests, this
information will be preserved there and so can be used
to analyze and troubleshoot storage IO.

Contributed by Steve Loughran.
2021-05-25 10:25:41 +01:00
Mehakmeet Singh
5f400032b6
HADOOP-17705. S3A to add Config to set AWS region (#3020)
The option `fs.s3a.endpoint.region` can be used
to explicitly set the AWS region of a bucket.

This is needed when using AWS Private Link, as
the region cannot be automatically determined.

Contributed by Mehakmeet Singh
2021-05-24 13:08:45 +01:00
Mehakmeet Singh
c665ab02ed
HADOOP-17670. S3AFS and ABFS to log IOStats at DEBUG mode or optionally at INFO level in close() (#2963)
When the S3A and ABFS filesystems are closed,
their IOStatistics are logged at debug in the log:

org.apache.hadoop.fs.statistics.IOStatisticsLogging

Set `fs.iostatistics.logging.level` to `info` for the statistics 
to be logged at info. (also: `warn` or `error` for even higher
log levels).


Contributed by: Mehakmeet Singh
2021-05-24 13:02:11 +01:00
Viraj Jasani
e4062ad027
HADOOP-17115. Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools (#2985)
Signed-off-by: Sean Busbey <busbey@apache.org>
2021-05-20 10:47:04 -05:00
Steve Loughran
68425eb469
HADOOP-16742. NullPointerException in S3A MultiObjectDeleteSupport
Contributed by Tor Arvid Lund.

Change-Id: Iadfe9b2f355cf373031075bfbe681705a2c65bdc
2021-05-04 11:23:01 +01:00
Steve Loughran
88a550bc3a
HADOOP-17112. S3A committers can't handle whitespace in paths. (#2953)
Contributed by Krzysztof Adamski.
2021-04-25 18:33:55 +01:00
Steve Loughran
027c8fb257
HADOOP-17597. Optionally downgrade on S3A Syncable calls (#2801)
Followup to HADOOP-13327, which changed S3A output stream hsync/hflush calls
to raise an exception.

Adds a new option fs.s3a.downgrade.syncable.exceptions

When true, calls to Syncable hsync/hflush on S3A output streams will
log once at warn (for entire process life, not just the stream), then
increment IOStats with the relevant operation counter

With the downgrade option false (default)
* IOStats are incremented
* The UnsupportedOperationException currently raised includes a link to the
  JIRA.

Contributed by Steve Loughran.
2021-04-23 18:44:41 +01:00
Steve Loughran
85d3bba555
HADOOP-17476. ITestAssumeRole.testAssumeRoleBadInnerAuth failure. (#2777)
Contributed by Steve Loughran.
2021-03-24 16:47:55 +00:00
Steve Loughran
04880f076d
HADOOP-13551. AWS metrics wire-up (#2778)
Moves to the builder API for AWS S3 client creation, and
offers a similar style of API to the S3A FileSystem and tests, hiding
the details of which options are client, which are in AWS Conf,
and doing the wiring up of S3A statistics interfaces to the AWS
SDK internals. S3A Statistics, including IOStatistics, should now
count throttling events handled in the AWS SDK itself.

This patch restores endpoint determination by probes to US-East-1
if the client isn't configured with fs.s3a.endpoint.

Explicitly setting the endpoint will save the cost of these probe
HTTP requests.

Contributed by Steve Loughran.
2021-03-24 13:32:54 +00:00
Ayush Saxena
03cfc85279
HADOOP-17531. DistCp: Reduce memory usage on copying huge directories. (#2732). Contributed by Ayush Saxena.
Signed-off-by: Steve Loughran <stevel@apache.org>
2021-03-24 02:36:26 +05:30
Steve Loughran
bcd9c67082
HADOOP-16721. Improve S3A rename resilience (#2742)
The S3A connector's rename() operation now raises FileNotFoundException if
the source doesn't exist; a FileAlreadyExistsException if the destination
exists and is unsuitable for the source file/directory.

When renaming to a path which does not exist, the connector no longer checks
for the destination parent directory existing -instead it simply verifies
that there is no file immediately above the destination path.
This is needed to avoid race conditions with delete() and rename()
calls working on adjacent subdirectories.

Contributed by Steve Loughran.
2021-03-11 12:47:39 +00:00
Akira Ajisaka
23b343aed1
HADOOP-16870. Use spotbugs-maven-plugin instead of findbugs-maven-plugin (#2753)
Removed findbugs from the hadoop build images and added spotbugs instead.
Upgraded SpotBugs to 4.2.2 and spotbugs-maven-plugin to 4.2.0.

Reviewed-by: Masatake Iwasaki <iwasakims@apache.org>
2021-03-11 10:56:07 +09:00
Pierrick Hymbert
ebfba0b6fa
[HADOOP-17567] typo in MagicCommitTracker (#2749)
Contributed by Pierrick Hymbert
2021-03-10 15:39:55 +00:00
Chao Sun
176bd88890
HADOOP-16080. hadoop-aws does not work with hadoop-client-api. (#2522)
Contributed by Chao Sun.

(Cherry-picked via PR #2575)
2021-03-09 20:01:29 +00:00
Akira Ajisaka
9a298d180d
Revert "HADOOP-16870. Use spotbugs-maven-plugin instead of findbugs-maven-plugin (#2454)"
This reverts commit 4cf3531583.
2021-02-19 11:09:10 +09:00
Akira Ajisaka
4cf3531583
HADOOP-16870. Use spotbugs-maven-plugin instead of findbugs-maven-plugin (#2454)
Use spotbugs instead of findbugs. Removed findbugs from the hadoop build images,
and added spotbugs in the images instead.

Reviewed-by: Masatake Iwasaki <iwasakims@apache.org>
Reviewed-by: Inigo Goiri <inigoiri@apache.org>
Reviewed-by: Dinesh Chitlangia <dineshc@apache.org>
2021-02-17 10:38:20 +09:00
Steve Loughran
78905d7e3f
HADOOP-16906. Abortable (#2684)
Adds an Abortable.abort() interface for streams to enable output streams to be terminated; this
is implemented by the S3A connector's output stream. It allows for commit protocols
to be implemented which commit/abort work by writing to the final destination and
using the abort() call to cancel any write which is not intended to be committed.
Consult the specification document for information about the interface and its use.
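
A sketch of a writer abandoning an in-progress write, assuming an S3A FileSystem `fs`; the path is hypothetical and the capability probe string is as this editor recalls it, so check it against the specification:

    import org.apache.hadoop.fs.Abortable;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.Path;

    FSDataOutputStream out = fs.create(new Path("s3a://example-bucket/job/_temp/part-0000"));
    out.write(bytes);   // bytes: application data
    if (out.hasCapability("fs.capability.outputstream.abortable")) {
      ((Abortable) out).abort();   // cancel the upload; nothing becomes visible at the destination
    } else {
      out.close();                 // fallback for streams without abort support
    }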

Contributed by Jungtaek Lim and Steve Loughran.
2021-02-11 17:37:20 +00:00
Steve Loughran
798df6d699
HADOOP-13327 Output Stream Specification. (#2587)
This defines what output streams and especially those which implement
Syncable are meant to do, and documents where implementations (HDFS; S3)
don't. With tests.

The file:// FileSystem now supports Syncable if an application calls
FileSystem.setWriteChecksum(false) before creating a file -checksumming
and Syncable.hsync() are incompatible.

Contributed by Steve Loughran.
2021-02-10 10:28:59 +00:00
Steve Loughran
26b9d480e8
HADOOP-17337. S3A NetworkBinding has a runtime dependency on shaded httpclient. (#2599)
Contributed by Steve Loughran.
2021-02-03 14:29:56 +00:00
Steve Loughran
0bb52a42e5
HADOOP-17483. Magic committer is enabled by default. (#2656)
* core-default.xml updated so that fs.s3a.committer.magic.enabled = true
* CommitConstants updated to match
* All tests which previously enabled the magic committer now rely on
  default settings. This helps make sure it is enabled.
* Docs cover the switch, mention it's enabled and explain why you may
  want to disable it.
Note: this doesn't switch to using the committer -it just enables the path
rewriting magic which it depends on.

Contributed by Steve Loughran.
2021-01-27 19:04:22 +00:00
Steve Loughran
28cc912a5c
HADOOP-17493. Revert name of DELEGATION_TOKENS_ISSUED constant/statistic (#2649)
Follow-on to HADOOP-16830/HADOOP-17271.

Contributed by Steve Loughran.
2021-01-27 16:39:29 +00:00
Steve Loughran
80c7404b51
HADOOP-17414. Magic committer files don't have the count of bytes written collected by spark (#2530)
This needs SPARK-33739 in the matching spark branch in order to work

Contributed by Steve Loughran.
2021-01-26 19:30:51 +00:00
Steve Loughran
06a5d3437f
HADOOP-17480. Document that AWS S3 is consistent and that S3Guard is not needed (#2636)
Contributed by Steve Loughran.
2021-01-25 13:21:34 +00:00
Maksim Bober
e2f8503ebd
HADOOP-17484. Typo in hadop-aws index.md (#2634)
Contributed by Maksim Bober.
2021-01-21 17:30:58 +00:00
Steve Loughran
68bc721841
HADOOP-17433. Skipping network I/O in S3A getFileStatus(/) breaks ITestAssumeRole. (#2600)
Contributed by Steve Loughran.
2021-01-19 17:19:27 +00:00
Steve Loughran
724edb0354
HADOOP-17451. IOStatistics test failures in S3A code. (#2594)
Caused by HADOOP-16830 and HADOOP-17271.

Fixes tests which fail intermittently based on configs and
in the case of the HugeFile tests, bulk runs with existing
FS instances meant statistic probes sometimes ended up probing those
of a previous FS.

Contributed by Steve Loughran.

Change-Id: I65ba3f44444e59d298df25ac5c8dc5a8781dfb7d
2021-01-12 17:30:32 +00:00
Steve Loughran
05c9c2ed02 Revert "HADOOP-17451. IOStatistics test failures in S3A code. (#2594)"
This reverts commit d3014e01f3.
(fixing commit text before it is frozen)
2021-01-12 17:29:59 +00:00
Steve Loughran
d3014e01f3
HADOOP-17451. IOStatistics test failures in S3A code. (#2594)
Caused by HADOOP-16380 and HADOOP-17271.

Fixes tests which fail intermittently based on configs and
in the case of the HugeFile tests, bulk runs with existing
FS instances meant statistic probes sometimes ended up probing those
of a previous FS.

Contributed by Steve Loughran.
2021-01-12 17:25:14 +00:00
Gabor Bota
42eb9ff68e
HADOOP-17454. [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0 (#2593)
Also fixes HADOOP-16995. ITestS3AConfiguration proxy tests failures when bucket probes == 0
The improvement should include the fix, because the test would fail by default otherwise.

Change-Id: I9a7e4b5e6d4391ebba096c15e84461c038a2ec59
2021-01-05 15:43:01 +01:00
Steve Loughran
617af28e80
HADOOP-17271. S3A connector to support IOStatistics. (#2580)
S3A connector to support the IOStatistics API of HADOOP-16830,

This is a major rework of the S3A Statistics collection to

* Embrace the IOStatistics APIs
* Move from direct references of S3AInstrumention statistics
  collectors to interface/implementation classes in new packages.
* Ubiquitous support of IOStatistics, including:
  S3AFileSystem, input and output streams, RemoteIterator instances
  provided in list calls.
* Adoption of new statistic names from hadoop-common

Regarding statistic collection, as well as all existing
statistics, the connector now records min/max/mean durations
of HTTP GET and HEAD requests, and those of LIST operations.

Contributed by Steve Loughran.
2020-12-31 21:55:39 +00:00
yzhangal
3d2193cd64
HADOOP-17338. Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc (#2497)
Contributed by Yongjun Zhang <yongjunzhang@pinterest.com>
2020-12-18 19:08:10 +00:00
Mukund Thakur
03b4e98971
HADOOP-17398. Skipping network I/O in S3A getFileStatus(/) breaks some tests (#2493)
Follow-on to HADOOP-17323.

Contributed by Mukund Thakur.
2020-11-26 20:25:32 +00:00
Steve Loughran
67dc0928c1
HADOOP-17385. ITestS3ADeleteCost.testDirMarkersFileCreation failure (#2473). Contributed by Steve Loughran
The addition of deprecated S3A configuration options in HADOOP-17318
triggered a reload of default (xml resource) configurations, which breaks
tests which fail if there's a per-bucket setting inconsistent with test
setup.

Creating an S3AFS instance before creating the Configuration() instance
for test runs gets that reload out the way before test setup takes
place.

Along with the fix, extra changes in the failing test suite to fail
fast when marker policy isn't as expected, and to log FS state better.

Rather than create and discard an instance, add a new static method
to S3AFS and invoke it in test setup. This forces the load to happen
before test setup takes place.

Change-Id: Id52b1c46912c6fedd2ae270e2b1eb2222a360329
2020-11-26 13:50:33 +01:00
Steve Loughran
ac7045b75f
HADOOP-17313. FileSystem.get to support slow-to-instantiate FS clients. (#2396)
This adds a semaphore to throttle the number of FileSystem instances which
can be created simultaneously, set in "fs.creation.parallel.count".

This is designed to reduce the impact of many threads in an application calling
FileSystem.get() on a filesystem which takes time to instantiate -for example
an object store where HTTPS connections are set up during initialization.
Many threads trying to do this may create spurious delays by conflicting
for access to synchronized blocks; simply limiting the parallelism
reduces the contention, and so speeds up all threads trying to access
the store.

The default value, 64, is larger than is likely to deliver any speedup -but
it does mean that there should be no adverse effects from the change.

If a service appears to be blocking on all threads initializing connections to
abfs, s3a or another store, try a smaller (possibly significantly smaller) value.

Contributed by Steve Loughran.
2020-11-25 14:31:02 +00:00
Mukund Thakur
5fee95076b
HADOOP-17323. S3A getFileStatus("/") to skip IO (#2479)
Contributed by Mukund Thakur.
2020-11-24 11:06:56 +00:00
Steve Loughran
9b4faf2b51
HADOOP-17332. S3A MarkerTool -min and -max are inverted. (#2425)
This patch
* fixes the inversion
* adds a precondition check
* if the commands are supplied inverted, swaps them with a warning.
  This is to stop breaking any tests written to cope with the existing
  behavior.

Contributed by Steve Loughran
2020-11-23 20:49:42 +00:00
Steve Loughran
fb79be932c
HADOOP-17343. Upgrade AWS SDK to 1.11.901 (#2468)
Contributed by Steve Loughran.
2020-11-23 14:08:12 +00:00
Jungtaek Lim
f3c629c27e
HADOOP-17388. AbstractS3ATokenIdentifier to issue date in UTC. (#2477)
Followup to HADOOP-17379.

Contributed by Jungtaek Lim.
2020-11-20 10:38:42 +00:00
Steve Loughran
ce7827c82a
HADOOP-17318. Support concurrent S3A commit jobs with same app attempt ID. (#2399)
See also [SPARK-33402]: Jobs launched in same second have duplicate MapReduce JobIDs

Contributed by Steve Loughran.

Change-Id: Iae65333cddc84692997aae5d902ad8765b45772a
2020-11-18 13:34:51 +00:00
Steve Loughran
e3c08f285a
HADOOP-17244. S3A directory delete tombstones dir markers prematurely. (#2310)
This fixes the S3Guard/Directory Marker Retention integration so that when
fs.s3a.directory.marker.retention=keep, failures during multipart delete
are handled correctly, as are incremental deletes during
directory tree operations.

In both cases, when a directory marker with children is deleted from
S3, the directory entry in S3Guard is not deleted, because it is still
critical to representing the structure of the store.

Contributed by Steve Loughran.

Change-Id: I4ca133a23ea582cd42ec35dbf2dc85b286297d2f
2020-11-18 12:18:11 +00:00
Jungtaek Lim
a7b923c80c
HADOOP-17379. AbstractS3ATokenIdentifier to set issue date == now. (#2466)
Unless you explicitly set it, the issue date of a delegation token identifier is 0, which confuses spark renewal (SPARK-33440). This patch makes sure that all S3A DT identifiers have the current time as issue date, fixing the problem as far as S3A tokens are concerned.

Contributed by Jungtaek Lim.
2020-11-17 14:43:29 +00:00
Doroszlai, Attila
dd85a90da6
HADOOP-17376. ITestS3AContractRename failing against stricter tests. (#2462)
Contributed by Attila Doroszlai.
2020-11-16 11:24:00 +00:00
Mukund Thakur
7f8ef76c48
HADOOP-17305. Fix ITestCustomSigner to work with s3 compatible endpoints (#2395)
Contributed by Mukund Thakur
2020-10-21 13:01:13 +01:00
Ayush Saxena
1e3a6efcef
HADOOP-17288. Use shaded guava from thirdparty. (#2342). Contributed by Ayush Saxena. 2020-10-17 12:01:18 +05:30
Dongjoon Hyun
b92f72758b
HADOOP-17258. Magic S3Guard Committer to overwrite existing pendingSet file on task commit (#2371)
Contributed by Dongjoon Hyun and Steve Loughran

Change-Id: Ibaf8082e60eff5298ff4e6513edc386c5bae0274
2020-10-12 13:39:15 +01:00
Steve Loughran
f83e07a20f HADOOP-17293. S3A to always probe S3 in S3A getFileStatus on non-auth paths
This reverts changes in HADOOP-13230 to use S3Guard TTL in choosing when
to issue a HEAD request; fixing tests to compensate.

New org.apache.hadoop.fs.s3a.performance.OperationCost cost,
S3GUARD_NONAUTH_FILE_STATUS_PROBE for use in cost tests.

Contributed by Steve Loughran.

Change-Id: I418d55d2d2562a48b2a14ec7dee369db49b4e29e
2020-10-08 15:35:57 +01:00
Mukund Thakur
82522d60fb
HADOOP-17281 Implement FileSystem.listStatusIterator() in S3AFileSystem (#2354)
Contains HADOOP-17300: FileSystem.DirListingIterator.next() call should 
return NoSuchElementException
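
A sketch of consuming the incremental listing, assuming a FileSystem `fs`; the directory path is hypothetical:

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    RemoteIterator<FileStatus> it =
        fs.listStatusIterator(new Path("s3a://example-bucket/data/"));
    while (it.hasNext()) {           // further pages are fetched as needed
      FileStatus status = it.next();
      // ... process status.getPath(), status.getLen() ...
    }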

Contributed by Mukund Thakur
2020-10-07 13:59:06 +01:00
Steve Loughran
7fae4133e0
HADOOP-17261. s3a rename() needs s3:deleteObjectVersion permission (#2303)
Contributed by Steve Loughran.
2020-09-22 17:22:04 +01:00
Mukund Thakur
83c7c2b4c4
HADOOP-17023. Tune S3AFileSystem.listStatus() (#2257)
S3AFileSystem.listStatus() is optimized for invocations
where the path supplied is a non-empty directory.
The number of S3 requests is significantly reduced, saving
time, money, and reducing the risk of S3 throttling.

Contributed by Mukund Thakur.
2020-09-21 17:20:16 +01:00
Steve Loughran
958cab804e
Revert "HADOOP-17244. S3A directory delete tombstones dir markers prematurely. (#2280)"
This reverts commit 9960c01a25.

Change-Id: I820534c3292f2a343693d835f625488c325fb5d6
2020-09-11 18:07:49 +01:00
Steve Loughran
9960c01a25
HADOOP-17244. S3A directory delete tombstones dir markers prematurely. (#2280)
This changes directory tree deletion so that only files are incrementally deleted
from S3Guard after the objects are deleted; the directories are left alone
until metadataStore.deleteSubtree(path) is invoked.

This avoids directory tombstones being added above files/child directories,
which stop the treewalk and delete phase from working.

Also:

* Callback to delete objects splits files and dirs so that
any problems deleting the dirs don't trigger s3guard updates
* New statistic to measure #of objects deleted, alongside request count.
* Callback listFilesAndEmptyDirectories renamed listFilesAndDirectoryMarkers
  to clarify behavior.
* Test enhancements to replicate the failure and verify the fix

Contributed by Steve Loughran
2020-09-10 17:03:52 +01:00
Steve Loughran
5346cc3263
HADOOP-17227. S3A Marker Tool tuning (#2254)
Contributed by Steve Loughran.
2020-09-04 14:58:03 +01:00
Mukund Thakur
139a43e98e
HADOOP-17167 ITestS3AEncryptionWithDefaultS3Settings failing (#2187)
Now skips ITestS3AEncryptionWithDefaultS3Settings.testEncryptionOverRename
when server side encryption is not set to sse:kms

Contributed by Mukund Thakur
2020-09-03 19:35:24 +01:00
Mukund Thakur
cc641534dc
HADOOP-17074. S3A Listing to be fully asynchronous. (#2207)
Contributed by Mukund Thakur.
2020-08-25 11:29:43 +01:00
Steve Loughran
5092ea62ec HADOOP-13230. S3A to optionally retain directory markers.
This adds an option to disable "empty directory" marker deletion,
so avoiding throttling and other scale problems.

This feature is *not* backwards compatible.
Consult the documentation and use with care.

Contributed by Steve Loughran.

Change-Id: I69a61e7584dc36e485d5e39ff25b1e3e559a1958
2020-08-15 12:51:08 +01:00
Mukund Thakur
4a400d3193
HADOOP-17192. ITestS3AHugeFilesSSECDiskBlock failing (#2221)
Contributed by Mukund Thakur
2020-08-13 14:21:49 +01:00
Ayush Saxena
975b6024dd HDFS-15514. Remove useless dfs.webhdfs.enabled. Contributed by Fei Hui. 2020-08-07 22:19:17 +05:30
Mukund Thakur
ac697571a1
HADOOP-17186. Fixing javadoc in ListingOperationCallbacks (#2196) 2020-08-05 20:40:49 +09:00
Mukund Thakur
8fd4f5490f
HADOOP-17131. Refactor S3A Listing code for better isolation. (#2148)
Contributed by Mukund Thakur.
2020-08-04 16:00:02 +01:00
Akira Ajisaka
c40cbc57fa
HADOOP-17091. [JDK11] Fix Javadoc errors (#2098) 2020-08-03 10:46:51 +09:00
Mukund Thakur
bb459d4dd6
HADOOP-17136. ITestS3ADirectoryPerformance.testListOperations failing (#2153)
A regression caused by HADOOP-17022: the reduction in LIST calls broke an assertion.

Contributed by Mukund Thakur
2020-07-20 16:58:50 +01:00
Mukund Thakur
4647a60430
HADOOP-17022. Tune S3AFileSystem.listFiles() API.
Contributed by Mukund Thakur.

Change-Id: I17f5cfdcd25670ce3ddb62c13378c7e2dc06ba52
2020-07-14 15:27:35 +01:00
jimmy-zuber-amzn
806d84b79c
HADOOP-17105. S3AFS - Do not attempt to resolve symlinks in globStatus (#2113)
Contributed by Jimmy Zuber.
2020-07-13 19:07:48 +01:00
Steve Loughran
b9fa5e0182
HDFS-13934. Multipart uploaders to be created through FileSystem/FileContext.
Contributed by Steve Loughran.

Change-Id: Iebd34140c1a0aa71f44a3f4d0fee85f6bdf123a3
2020-07-13 13:30:02 +01:00
Sebastian Nagel
5b1ed2113b
HADOOP-17117 Fix typos in hadoop-aws documentation (#2127) 2020-07-09 00:03:15 +09:00
Steve Loughran
4249c04d45
HADOOP-16798. S3A Committer thread pool shutdown problems. (#1963)
Contributed by Steve Loughran.

Fixes a condition which can cause job commit to fail if a task was
aborted < 60s before the job commit commenced: the task abort
will shut down the thread pool with a hard exit after 60s; the
job commit POST requests would be scheduled through the same pool,
so be interrupted and fail. At present the access is synchronized,
but presumably the executor shutdown code is calling wait() and releasing
locks.

Task abort is triggered from the AM when task attempts succeed but
there are still active speculative task attempts running. Thus it
only surfaces when speculation is enabled and the final tasks are
speculating, which, given they are the stragglers, is not unheard of.

Note: this problem has never been seen in production; it has surfaced
in the hadoop-aws tests on a heavily overloaded desktop
2020-06-30 10:44:51 +01:00
Steve Loughran
ac5d899d40
HADOOP-17050 S3A to support additional token issuers
Contributed by Steve Loughran.

S3A delegation token providers will be asked for any additional
token issuers; an array can be returned, and
each one will be asked for tokens when DelegationTokenIssuer collects
all the tokens for a filesystem.
2020-06-09 14:39:06 +01:00
Steve Loughran
40d63e02f0
HADOOP-16568. S3A FullCredentialsTokenBinding fails if local credentials are unset. (#1441)
Contributed by Steve Loughran.

Move the loading to deployUnbonded (where they are required) and add a safety check when a new DT is requested
2020-06-03 17:07:00 +01:00
Masatake Iwasaki
9685314633
HADOOP-17040. Fix intermittent failure of ITestBlockingThreadPoolExecutorService. (#2020) 2020-05-22 18:50:19 +09:00
Mukund Thakur
29b19cd592
HADOOP-16900. Very large files can be truncated when written through the S3A FileSystem.
Contributed by Mukund Thakur and Steve Loughran.

This patch ensures that writes to S3A fail when more than 10,000 blocks are
written. That upper bound still exists. To write massive files, make sure
that the value of fs.s3a.multipart.size is set to a size which is large
enough to upload the files in fewer than 10,000 blocks.
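
A worked example of the arithmetic behind that bound; the part size chosen is illustrative:

    // with fs.s3a.multipart.size = 128M (illustrative) and the 10,000-part limit:
    long partSize = 128L * 1024 * 1024;      // 128 MiB per uploaded block
    long maxParts = 10_000;                  // S3 multipart upload hard limit
    long largestFile = partSize * maxParts;  // ~1.22 TiB; bigger files need a bigger part size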

Change-Id: Icec604e2a357ffd38d7ae7bc3f887ff55f2d721a
2020-05-20 13:42:25 +01:00
Masatake Iwasaki
0b7799bf6e
HADOOP-16586. ITestS3GuardFsck, others fails when run using a local metastore. (#1950) 2020-05-20 08:47:04 +09:00
Masatake Iwasaki
99840aaba6
HADOOP-17025. Fix invalid metastore configuration in S3GuardTool tests. (#1994) 2020-05-07 12:00:47 +09:00
Steve Loughran
93b662db47
HADOOP-16953. tuning s3guard disabled warnings (#1962)
Contributed by Steve Loughran.

The S3Guard absence warning of HADOOP-16484 has been changed
so that by default the S3A connector only logs at debug
when the connection to the S3 Store does not have S3Guard
enabled.

The option to control this log level is now
fs.s3a.s3guard.disabled.warn.level
and can be one of: silent, inform, warn, fail.

On a failure, an ExitException is raised with exit code 49.

For details on this safety feature, consult the s3guard documentation.
2020-04-20 15:05:55 +01:00
Steve Loughran
42711081e3
HADOOP-16986. S3A to not need wildfly on the classpath. (#1948)
HADOOP-16986. S3A to not need wildfly JAR on its classpath.

Contributed by Steve Loughran

This is a successor to HADOOP-16346, which enabled the S3A connector
to load the native openssl SSL libraries for better HTTPS performance.

That patch required wildfly.jar to be on the classpath. This
update:

* Makes wildfly.jar optional except in the special case that 
"fs.s3a.ssl.channel.mode" is set to "openssl"

* Retains the declaration of wildfly.jar as a compile-time
dependency in the hadoop-aws POM. This means that unless
explicitly excluded, applications importing that published
maven artifact will, transitively, add the specified
wildfly JAR into their classpath for compilation/testing/
distribution.

This is done for packaging and to offer that optional
speedup. It is not mandatory: applications importing
the hadoop-aws POM can exclude it if they choose.
2020-04-20 14:32:13 +01:00
Mukund Thakur
56350664a7
HADOOP-13873. log DNS addresses on s3a initialization.
Contributed by Mukund Thakur.

If you set the log org.apache.hadoop.fs.s3a.impl.NetworkBinding
to DEBUG, then when the S3A bucket probe is made -the DNS address
of the S3 endpoint is calculated and printed.

This is useful to see if a large set of processes are all using
the same IP address from the pool of load balancers to which AWS
directs clients when an AWS S3 endpoint is resolved.

This can have implications for performance: if all clients
access the same load balancer performance may be suboptimal.

Note: if bucket probes are disabled, fs.s3a.bucket.probe = 0,
the DNS logging does not take place.

Change-Id: I21b3ac429dc0b543f03e357fdeb94c2d2a328dd8
2020-04-17 14:15:38 +01:00
Mukund Thakur
8505840c26
HADOOP-16979. S3Guard auth mode should be set to false by default in integration tests. (#1958) Contributed by Mukund Thakur. 2020-04-16 13:02:30 +02:00
Mukund Thakur
7b2d84d19c
HADOOP-16465 listLocatedStatus() optimisation (#1943)
Contributed by Mukund Thakur

Optimize S3AFileSystem.listLocatedStatus() to perform list
operations directly and then fall back to HEAD checks for files
2020-04-14 17:19:51 +01:00
Steve Loughran
aeeebc5e79
HADOOP-16941. ITestS3GuardOutOfBandOperations.testListingDelete failing on versioned bucket (#1919)
Contributed by Steve Loughran.

Removed the failing probe, replacing it with two probes which will fail
on both versioned and unversioned buckets.
2020-04-14 10:56:00 +01:00
Brahma Reddy Battula
8914cf9167 Preparing for 3.4.0 development 2020-03-29 23:24:25 +05:30
Steve Loughran
eaaaba12b1
HADOOP-16939 fs.s3a.authoritative.path should support multiple FS URIs (#1914)
add unit test, new ITest and then fix the issue: different schema, bucket == skip

factored out the underlying logic for unit testing; also moved
maybeAddTrailingSlash to S3AUtils (while retaining/forwarding existing method
in S3AFS).

tested: london, sole failure is
testListingDelete[auth=true](org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations)

filed HADOOP-16853

Change-Id: I4b8d0024469551eda0ec70b4968cba4abed405ed
2020-03-26 12:59:11 -06:00
Nicholas Chammas
25a03bfece
HADOOP-16930. Add hadoop-aws documentation for ProfileCredentialsProvider
Contributed by Nicholas Chammas.
2020-03-25 10:39:35 +00:00
Gabor Bota
c91ff8c18f
HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries (#1851). Contributed by Gabor Bota.
Adding a new feature to S3GuardTool's fsck: -fix. 

Change-Id: I2cdb6601fea1d859b54370046b827ef06eb1107d
2020-03-18 12:48:52 +01:00
Steve Loughran
8d6373483e
HADOOP-16319. S3A Etag tests fail with default encryption enabled on bucket.
Contributed by Ben Roling.

ETag values are unpredictable with some S3 encryption algorithms.

Skip ITestS3AMiscOperations tests which make assertions about etags
when default encryption on a bucket is enabled.

When testing with an AWS account which lacks the privilege
for a call to getBucketEncryption(), we don't skip the tests.
In the event of failure, developers get to expand the
permissions of the account or relax default encryption settings.
2020-03-17 13:31:48 +00:00
Steve Loughran
0a9b3c98b1
HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard (#1646)
Contributed by Steve Loughran

* move qualify logic to S3AFileSystem.makeQualified()
* make S3AFileSystem.qualify() a private redirect to that
* ITestS3GuardFsShell turned off
2020-03-12 14:13:55 +00:00
Steve Loughran
d4d4c37810
HADOOP-14630 Contract Tests to verify create, mkdirs and rename under a file is forbidden
Contributed by Steve Loughran.

Not all stores do complete validation here; in particular the S3A
Connector does not: checking up the entire directory tree to see if any
parent path is a file significantly slows things down.

This check does take place in S3A mkdirs(), which walks backwards up the list of
parent paths until it finds a directory (success) or a file (failure).
In practice production applications invariably create destination directories
before writing 1+ file into them -restricting the check purely to the mkdirs()
call delivers a significant speedup while implicitly including the checks.

Change-Id: I2c9df748e92b5655232e7d888d896f1868806eb0
2020-03-09 14:44:28 +00:00
Gabor Bota
edc2e9d2f1
HADOOP-14936. S3Guard: remove experimental from documentation.
Contributed by Gabor Bota.
2020-03-02 18:16:52 +00:00
Mukund Thakur
f864ef7429
HADOOP-16794. S3A reverts KMS encryption to the bucket's default KMS key in rename/copy.
Contributed by Mukund Thakur.

This addresses an issue which surfaced with KMS encryption: the wrong
KMS key could be picked up in the S3 COPY operation, so
renamed files, while encrypted, would end up with the
bucket default key.

As well as adding tests in the new suite
ITestS3AEncryptionWithDefaultS3Settings,
AbstractSTestS3AHugeFiles has a new test method to
verify that the encryption settings also work
for large files copied via multipart operations.
2020-03-02 17:31:12 +00:00
spoganshev
e553eda9cd
HADOOP-16767 Handle non-IO exceptions in reopen()
Contributed by Sergei Poganshev.

Catches Exception instead of IOException in closeStream() 
and so handles exceptions such as SdkClientException by
aborting the wrapped stream. This will increase resilience
to failures, as any which occur during stream closure
will be caught. Furthermore, because the
underlying HTTP connection is aborted, rather than closed,
it will not be recycled to cause problems on subsequent
operations.
2020-03-02 17:17:54 +00:00
Steve Loughran
929004074f
HADOOP-16853. ITestS3GuardOutOfBandOperations failing on versioned S3 buckets (#1840)
Contributed by Steve Loughran.

Signed-off-by: Mingliang Liu <liuml07@apache.org>
2020-02-24 10:45:34 -08:00
Mukund Thakur
e77767bb1e
HADOOP-16711.
This adds a new option fs.s3a.bucket.probe, range (0-2) to
control which probe for a bucket existence to perform on startup.

0: no checks
1: v1 check (as has been performed until now)
2: v2 bucket check, which also includes a permission check. Default.

When set to 0, bucket existence checks won't be done
during initialization thus making it faster.
When the bucket is not available in S3,
or if fs.s3a.endpoint points to the wrong instance of a private S3 store,
consecutive calls like listing, read, write etc. will fail with
an UnknownStoreException.
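
A minimal sketch of disabling the startup probe; as described above, a missing bucket then surfaces later as an UnknownStoreException:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    conf.setInt("fs.s3a.bucket.probe", 0);   // skip the bucket existence check at initialization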

Contributed by:
  * Mukund Thakur (main patch and tests)
  * Rajesh Balamohan (v0 list and performance tests)
  * lqjacklee (HADOOP-15990/v2 list)
  * Steve Loughran (UnknownStoreException support)

       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
       modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
       new file:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/UnknownStoreException.java
       new file:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java
       modified:   hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
       modified:   hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
       modified:   hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
       new file:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
       modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java
       modified:   hadoop-tools/hadoop-aws/src/test/resources/core-site.xml

Change-Id: Ic174f803e655af172d81c1274ed92b51bdceb384
2020-02-21 13:44:46 +00:00
lqjacklee
c77fc6971b
HADOOP-15961. S3A committers: make sure there's regular progress() calls.
Contributed by lqjacklee.

Change-Id: I13ca153e1e32b21dbe64d6fb25e260e0ff66154d
2020-02-17 22:06:34 +00:00
Steve Loughran
56dee66770
HADOOP-16823. Large DeleteObject requests are their own Thundering Herd.
Contributed by Steve Loughran.

During S3A rename() and delete() calls, the list of objects to delete is
built up into batches of a thousand and then POSTed in a single large
DeleteObjects request.

But as the IO capacity allowed on an S3 partition may only be 3500 writes
per second *and* each entry in that POST counts as a single write, then
one of those posts alone can trigger throttling on an already loaded
S3 directory tree. Which can trigger backoff and retry, with the same
thousand entry post, and so recreate the exact same problem.

Fixes

* Page size for delete object requests is set in
  fs.s3a.bulk.delete.page.size; the default is 250.
* The property fs.s3a.experimental.aws.s3.throttling (default=true)
  can be set to false to disable throttle retry logic in the AWS
  client SDK -it is all handled in the S3A client. This
  gives more visibility into when operations are being throttled.
* Bulk delete throttling events are logged to the
  org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears
  often then choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete
  requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate" can track throttling
  load over time.
* DynamoDB metastore throttle resilience issues have also been
  identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling
  flag does not apply to DDB IO precisely because there may still be
  lurking issues there and it is safest to rely on the DynamoDB client
  SDK.
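
A sketch of how the options above might be tuned (property names are from
this change; the values shown are arbitrary examples, not recommendations):

  import org.apache.hadoop.conf.Configuration;

  public class BulkDeleteTuning {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // smaller pages spread the delete load over more, smaller requests
      conf.setInt("fs.s3a.bulk.delete.page.size", 100);
      // leave throttling recovery to the S3A client rather than the AWS SDK
      conf.setBoolean("fs.s3a.experimental.aws.s3.throttling", false);
    }
  }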

Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84
2020-02-13 19:09:49 +00:00
Mukund Thakur
146ca0f545
HADOOP-16832. S3Guard testing doc: Add required parameters for S3Guard testing in IDE. (#1822). Contributed by Mukund Thakur. 2020-02-06 15:13:25 +01:00
Mustafa İman
5977360878
HADOOP-16801. S3Guard listFiles will not query S3 if all listings are authoritative (#1815). Contributed by Mustafa İman. 2020-01-30 11:16:51 +01:00
Steve Loughran
7f40e6688a
HADOOP-16746. mkdirs and s3guard Authoritative mode.
Contributed by Steve Loughran.

This fixes two problems with S3Guard authoritative mode and
the auth directory flags which are stored in DynamoDB.

1. mkdirs was creating dir markers without the auth bit,
   forcing needless scans on newly created directories and
   files subsequently added; it was only with the first listStatus call
   on that directory that the dir would be marked as authoritative -even
   though it would be complete already.

2. listStatus(path) would reset the authoritative status bit of all
   child directories even if they were already marked as authoritative.

Issue #2 is possibly the most expensive, as any treewalk using listStatus
(e.g. glob operations) would clear the auth bit for all child directories before
listing them. And this would happen every single time...
essentially you weren't getting authoritative directory listings.

For the curious, the major bug was actually found during testing
-we'd all missed it during reviews.

A lesson there: the better the tests the fewer the bugs.

Maybe also: something obvious and significant can get by code reviews.

	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/BulkOperationState.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/LocalMetadataStore.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStore.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/NullMetadataStore.java
	modified:   hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestRestrictedReadAccess.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestPartialDeleteFailures.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreAuthoritativeMode.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardFsck.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
	modified:   hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java

Change-Id: Ic3ffda13f2af2430afedd50fd657b595c83e90a7
2020-01-25 18:35:02 +00:00
Mustafa Iman
839054754b
HADOOP-16792: Make S3 client request timeout configurable.
Contributed by Mustafa Iman.

This adds a new configuration option fs.s3a.connection.request.timeout
to declare the time out on HTTP requests to the AWS service;
0 means no timeout.
Measured in seconds; the usual time suffixes are all supported

Important: this is the maximum duration of any AWS service call,
including upload and copy operations. If non-zero, it must be larger
than the time to upload multi-megabyte blocks to S3 from the client,
and to rename many-GB files. Use with care.
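
For example (a sketch; the property name is from this change, the chosen
duration is arbitrary):

  import org.apache.hadoop.conf.Configuration;

  public class RequestTimeoutExample {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // cap every AWS service call at ten minutes; "0" would mean no timeout
      conf.set("fs.s3a.connection.request.timeout", "10m");
    }
  }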

Change-Id: I407745341068b702bf8f401fb96450a9f987c51c
2020-01-24 13:37:07 +00:00
Mingliang Liu
6c1fa24ac0 HADOOP-16732. S3Guard to support encrypted DynamoDB table (#1752). Contributed by Mingliang Liu. 2020-01-23 14:21:42 +01:00
Steve Loughran
5e2ce370a3 HADOOP-16759. Filesystem openFile() builder to take a FileStatus param (#1761). Contributed by Steve Loughran
* Enhanced builder + FS spec
* s3a FS to use this to skip HEAD on open
* and to use version/etag when opening the file

works with both S3AFileStatus and S3ALocatedFileStatus
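
A minimal sketch of the builder usage described above (the bucket and path
are placeholders):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class OpenFileExample {
    public static void main(String[] args) throws Exception {
      Path path = new Path("s3a://example-bucket/data/part-0000");
      FileSystem fs = path.getFileSystem(new Configuration());
      FileStatus status = fs.getFileStatus(path);  // e.g. from an earlier listing
      // passing the status in lets S3A skip its own HEAD request on open
      try (FSDataInputStream in = fs.openFile(path)
          .withFileStatus(status)
          .build()
          .get()) {
        in.read();
      }
    }
  }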
2020-01-21 14:31:51 -08:00
Sahil Takiar
f206b736f0
HADOOP-16346. Stabilize S3A OpenSSL support.
Introduces `openssl` as an option for `fs.s3a.ssl.channel.mode`.
The new option is documented and marked as experimental.
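
For example (a sketch; the property and value are those introduced here):

  import org.apache.hadoop.conf.Configuration;

  public class SslChannelModeExample {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // experimental: use wildfly/OpenSSL rather than the default JSSE stack
      conf.set("fs.s3a.ssl.channel.mode", "openssl");
    }
  }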

For details on how to use this, consult the performance document
in the s3a documentation.

This patch is the successor to HADOOP-16050 "S3A SSL connections
should use OpenSSL" -which was reverted because of
incompatibilities between the wildfly OpenSSL client and the AWS
HTTPS servers (HADOOP-16347). With the Wildfly release moved up
to 1.0.7.Final (HADOOP-16405) everything should now work.

Related issues:

* HADOOP-15669. ABFS: Improve HTTPS Performance
* HADOOP-16050: S3A SSL connections should use OpenSSL
* HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
* HADOOP-16405: Upgrade Wildfly Openssl version to 1.0.7.Final

Contributed by Sahil Takiar

Change-Id: I80a4bc5051519f186b7383b2c1cea140be42444e
2020-01-21 16:37:51 +00:00
Steve Loughran
49df838995
HADOOP-16697. Tune/audit S3A authoritative mode.
Contains:

HADOOP-16474. S3Guard ProgressiveRenameTracker to mark destination
              directory as authoritative on success.
HADOOP-16684. S3guard bucket info to list a bit more about
              authoritative paths.
HADOOP-16722. S3GuardTool to support FilterFileSystem.

This patch improves the marking of newly created/import directory
trees in S3Guard DynamoDB tables as authoritative.

Specific changes:

 * Renamed directories are marked as authoritative if the entire
   operation succeeded (HADOOP-16474).
 * When updating parent table entries as part of any table write,
   there's no overwriting of their authoritative flag.

s3guard import changes:

* new -verbose flag to print out what is going on.

* The "s3guard import" command lets you declare that a directory tree
is to be marked as authoritative

  hadoop s3guard import -authoritative -verbose s3a://bucket/path

When importing a listing and a file is found, the import tool queries
the metastore and only updates the entry if the file is different from
before, where different == new timestamp, etag, or length. S3Guard can get
timestamp differences due to clock skew in PUT operations.

As the recursive list performed by the import command doesn't retrieve the
versionID, the existing entry may in fact be more complete.
When updating an existing entry due to clock skew, the existing version ID
is propagated to the new entry (note: the etags must match; this is needed
to deal with inconsistent listings).

There is a new s3guard command to audit a s3guard bucket/path's
authoritative state:

  hadoop s3guard authoritative -check-config s3a://bucket/path

This is primarily for testing/auditing.

The s3guard bucket-info command also provides some more details on the
authoritative state of a store (HADOOP-16684).

Change-Id: I58001341c04f6f3597fcb4fcb1581ccefeb77d91
2020-01-10 11:11:56 +00:00
Steve Loughran
52cc20e9ea
HADOOP-16642. ITestDynamoDBMetadataStoreScale fails when throttled.
Contributed by Steve Loughran.

Change-Id: If9b4ebe937200c17d7fdfb9923e6ae0ab4c541ef
2020-01-08 14:28:20 +00:00
Steve Loughran
2bbf73f1df HADOOP-16645. S3A Delegation Token extension point to use StoreContext.
Contributed by Steve Loughran.

This is part of the ongoing refactoring of the S3A codebase, with the
delegation token support (HADOOP-14556) no longer given a direct reference
to the owning S3AFileSystem. Instead it gets a StoreContext and a new
interface, DelegationOperations, to access those operations offered by S3AFS
which are specifically needed by the DT bindings.

The sole operation needed is listAWSPolicyRules(), which is used to allow
S3A FS and the S3Guard metastore to return the AWS policy rules needed to
access their specific services/buckets/tables, allowing the AssumedRole
delegation token to be locked down.
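
A rough sketch of what such an extension-point interface might look like;
the method name comes from this change, but the return type here is a
placeholder, not the real signature:

  import java.util.List;

  /** Illustrative shape only; the actual S3A interface may differ. */
  public interface DelegationOperations {
    /**
     * List the AWS policy rules needed to access the services, buckets and
     * tables used by the filesystem and its metastore, so that the issued
     * role policy can be locked down.
     */
    List<String> listAWSPolicyRules();
  }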

As further restructuring takes place, that interface's implementation
can be moved to wherever the new home for those operations ends up.

Although it changes the API of an extension point, that feature (S3
Delegation Tokens) has not shipped; backwards compatibility is not a
problem except for anyone who has implemented DT support against trunk.
To those developers: sorry.

Change-Id: I770f58b49ff7634a34875ba37b7d51c94d7c21da
2020-01-07 11:17:37 +00:00
Steve Loughran
382151670b HADOOP-16450. ITestS3ACommitterFactory to not use useInconsistentClient. (#1145)
Contributed by Steve Loughran.

Change-Id: Ifb9771a73a07f744e4ed5f5e6be72473179db439
2019-12-16 14:29:30 +01:00
Mingliang Liu
d12ad9e8ad
HADOOP-16757. Increase timeout unit test rule for MetadataStoreTestBase (#1757)
Contributed by Mingliang Liu.

Signed-off-by: Steve Loughran <stevel@apache.org>
2019-12-13 08:19:27 -08:00
Mingliang Liu
b56c08b2b7
HADOOP-16758. Refine testing.md to tell user better how to use auth-keys.xml (#1753)
Contributed by Mingliang Liu
2019-12-11 11:52:53 -08:00
Gabor Bota
875a3e97dd
HADOOP-16424. S3Guard fsck: Check internal consistency of the MetadataStore (#1691). Contributed by Gabor Bota. 2019-12-10 15:51:49 +01:00
Mingliang Liu
19512b21e3
HADOOP-16735. Make it clearer in config default that EnvironmentVariableCredentialsProvider supports AWS_SESSION_TOKEN. Contributed by Mingliang Liu
This closes #1733
2019-12-05 17:37:17 -08:00
Gabor Bota
ea25f4de23
HADOOP-16709. S3Guard: Make authoritative mode exclusive for metadata - don't check for expiry for authoritative paths (#1721). Contributed by Gabor Bota. 2019-11-26 16:36:19 +01:00
Steve Loughran
9fbfe6c8f9
HADOOP-16632 Speculating & Partitioned S3A magic committers can leave pending files under __magic (#1599)
Contributed by Steve Loughran.

This downgrades the checks for leftover __magic entries from fail to warn, now that the parallel
test runs make speculation more likely. 

Change-Id: Ia4df2e90f82a06dbae69f3fdaadcbb0e0d713b38
2019-11-19 13:54:33 +00:00
Gabor Bota
cad540819f
HADOOP-16484. S3A to warn or fail if S3Guard is disabled - addendum: silent for S3GuardTool (#1714). Contributed by Gabor Bota.
Change-Id: I63b928ef5da425ef982dd4100a426fc23f64bac1
2019-11-18 13:56:37 +01:00
Steve Loughran
990063d2af
HADOOP-16665. Filesystems to be closed if they failed during initialize().
Contributed by Steve Loughran.

This changes FileSystem instantiation so that if an IOException or RuntimeException is
raised in the invocation of FileSystem.initialize(), a best-effort
attempt is made to close the FS instance; exceptions raised during that
cleanup are swallowed.
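
A minimal sketch of that best-effort cleanup (illustrative names, not the
actual FileSystem.get() code path):

  import java.io.IOException;
  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.util.ReflectionUtils;

  class FsCreation {
    static FileSystem createFileSystem(URI uri, Configuration conf,
        Class<? extends FileSystem> clazz) throws IOException {
      FileSystem fs = ReflectionUtils.newInstance(clazz, conf);
      try {
        fs.initialize(uri, conf);
        return fs;
      } catch (IOException | RuntimeException e) {
        try {
          fs.close();                 // best effort: free threads/connections
        } catch (Exception cleanupFailure) {
          // swallowed: the original failure is the one that matters
        }
        throw e;
      }
    }
  }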

The S3AFileSystem is also modified to do its own cleanup if an
IOException is raised during its initialize() process, as it is the
FS we know has the "potential" to leak threads, especially in
extension points (e.g. AWS Authenticators) which spawn threads.

Change-Id: Ib84073a606c9d53bf53cbfca4629876a03894f04
2019-11-12 18:17:21 +00:00
Steve Loughran
f6697aa82b
HADOOP-16477. S3A delegation token tests fail if fs.s3a.encryption.key set.
Contributed by Steve Loughran.

Change-Id: I843989f32472bbdefbd4fa504b26c7a614ab1cee
2019-11-12 15:31:53 +00:00
Takanobu Asanuma
d17ba85482 HADOOP-16681. mvn javadoc:javadoc fails in hadoop-aws. Contributed by Xieming Li 2019-11-05 15:24:59 +09:00
Gabor Bota
dca19fc3aa
HADOOP-16484. S3A to warn or fail if S3Guard is disabled (#1661). Contributed by Gabor Bota. 2019-11-04 12:55:36 +01:00
Gabor Bota
d5e9971e6d
HADOOP-16653. S3Guard DDB overreacts to no tag access (#1660). Contributed by Gabor Bota. 2019-10-28 11:22:41 +01:00
Phil Zampino
1d5d7d0989
HADOOP-16658. S3A connector does not support including the token renewer in the token identifier.
Contributed by Phil Zampino.

Change-Id: Iea9d5028dcf58bda4da985604f5cd3ac283619bd
2019-10-23 16:32:49 +01:00
Steve Loughran
bbcf0b91d6 HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation.
Contributed by Steve Loughran.

Includes HADOOP-16651. S3 getBucketLocation() can return "US" for us-east.

Change-Id: Ifc0dca76e51495ed1a8fc0f077b86bf125deff40
2019-10-16 09:41:33 +01:00
Steve Loughran
74e5018d87 HADOOP-16635. S3A "directories only" scan still does a HEAD.
Contributed by Steve Loughran.

Change-Id: I5e41d7f721364c392e1f4344db83dfa8c5aa06ce
2019-10-14 17:05:52 +01:00
Steve Loughran
dee9e97075 Revert "HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos."
This reverts commit 7a4b3d42c4.

The patch broke TestRouterWebHDFSContractSeek as it turns out that
WebHDFSInputStream.available() is always 0.
2019-10-14 16:56:50 +01:00
Gabor Bota
4a700c20d5
HADOOP-16520. Race condition in DDB table init and waiting threads. (#1576). Contributed by Gabor Bota.
Fixes HADOOP-16349. DynamoDBMetadataStore.getVersionMarkerItem() to log at info/warn on retry

Change-Id: Ia83e92b9039ccb780090c99c41b4f71ef7539d35
2019-10-11 12:08:47 +02:00
lqjacklee
7a4b3d42c4
HADOOP-15870. S3AInputStream.remainingInFile should use nextReadPos.
Contributed by lqjacklee.

Change-Id: I32bb00a683102e7ff8ff8ce0b8d9c3195ca7381c
2019-10-10 21:58:42 +01:00
Steve Loughran
effe6087a5
HADOOP-16650. ITestS3AClosedFS failing.
Contributed by Steve Loughran.

Change-Id: Ia9bb84bd6455e210a54cfe9eb944feeda8b58da9
2019-10-10 17:32:25 +01:00
Steve Loughran
b8086bf54d
HADOOP-16626. S3A ITestRestrictedReadAccess fails without S3Guard.
Contributed by Steve Loughran.

Change-Id: Ife730b80057ddd43e919438cb5b2abbda990e636
2019-10-05 12:52:42 +01:00
Steve Loughran
6574f27fa3
HADOOP-16570. S3A committers encounter scale issues.
Contributed by Steve Loughran.

This addresses two scale issues which have surfaced in large-scale benchmarks
of the S3A Committers.

* Thread pools are not cleaned up.
  This now happens, with tests.

* OOM on job commit for jobs with many thousands of tasks,
  each generating tens of (very large) files.

Instead of loading all pending commits into memory as a single list, only the list
of files to load is passed around; .pendingset files are
loaded and processed in isolation -and reloaded if necessary for any
abort/rollback operation.

The parallel commit/abort/revert operations now work at the .pendingset level,
rather than that of individual pending commit files. The existing parallelized
Tasks API is still used to commit those files, but with a null thread pool, so
as to serialize the operations.

Change-Id: I5c8240cd31800eaa83d112358770ca0eb2bca797
2019-10-04 18:54:22 +01:00
Steve Loughran
f44abc3e11
HADOOP-16207 Improved S3A MR tests.
Contributed by Steve Loughran.

Replaces the committer-specific terasort and MR test jobs with parameterization
of the (now single) tests and use of file:// over hdfs:// as the cluster FS.

The parameterization ensures that only one of the specific committer tests
runs at a time -overloads of the test machines are less likely, and so the
suites can be pulled back into the parallel phase.

There's also more detailed validation of the stage outputs of the terasorting;
if one test fails the rest are all skipped. This and the fact that job
output is stored under target/yarn-${timestamp} means failures should
be more debuggable.

Change-Id: Iefa370ba73c6419496e6e69dd6673d00f37ff095
2019-10-04 14:12:31 +01:00
Siddharth Seth
559ee277f5
HADOOP-16599. Allow a SignerInitializer to be specified along with a Custom Signer 2019-10-02 16:03:48 -07:00
Steve Loughran
1921e94292
HADOOP-16458. LocatedFileStatusFetcher.getFileStatuses failing intermittently with S3
Contributed by Steve Loughran.

Includes
-S3A glob scans don't bother trying to resolve symlinks
-stack traces don't get lost in getFileStatuses() when exceptions are wrapped
-debug level logging of what is up in Globber
-Contains HADOOP-13373. Add S3A implementation of FSMainOperationsBaseTest.
-ITestRestrictedReadAccess tests incomplete read access to files.

This adds a builder API for constructing globbers which other stores can use
so that they too can skip symlink resolution when not needed.

Change-Id: I23bcdb2783d6bd77cf168fdc165b1b4b334d91c7
2019-10-01 18:11:05 +01:00
Xieming Li
c89d22d13a
HADOOP-16602. mvn package fails in hadoop-aws.
Contributed by Xieming Li.

Follow-up to HADOOP-16445

Change-Id: I72c62d55b734a0f67556844f398ef4a50d9ea585
2019-09-25 14:15:35 +01:00
Steve Loughran
e346e3638c HADOOP-15691 Add PathCapabilities to FileSystem and FileContext.
Contributed by Steve Loughran.

This complements the StreamCapabilities interface by allowing applications to probe whether a specific path on a specific instance of a FileSystem client
offers a specific capability.

This is intended to allow applications to determine

* Whether a method is implemented before calling it and dealing with UnsupportedOperationException.
* Whether a specific feature is believed to be available in the remote store.

As well as a common set of capabilities defined in CommonPathCapabilities,
file systems are free to add their own capabilities, prefixed with
"fs." + scheme + "."

The plan is to identify and document more capabilities -and, for file systems which add new features, to always declare the availability of those features.

Note

* The remote store is not expected to be checked for the feature;
  It is more a check of client API and the client's configuration/knowledge
  of the state of the remote system.
* Permissions are not checked.
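
A minimal sketch of the probe (hasPathCapability() is the API added here;
the path and the capability constant chosen are just examples):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.CommonPathCapabilities;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class PathCapabilityProbe {
    public static void main(String[] args) throws Exception {
      Path path = new Path("s3a://example-bucket/logs");
      FileSystem fs = path.getFileSystem(new Configuration());
      // ask the client whether append is believed to work under this path
      if (fs.hasPathCapability(path, CommonPathCapabilities.FS_APPEND)) {
        System.out.println("append should be supported here");
      }
    }
  }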

Change-Id: I80bfebe94f4a8bdad8f3ac055495735b824968f5
2019-09-25 12:16:41 +01:00
Siddharth Seth
2b5fc95851
HADOOP-16591 Fix S3A ITest*MRjob failures.
Contributed by Siddharth Seth.

Change-Id: I7f08201c9f7c0551514049389b5b398a84855191
2019-09-23 14:58:03 +01:00
Siddharth Seth
e02b1023c2
HADOOP-16445. Allow separate custom signing algorithms for S3 and DDB (#1332) 2019-09-21 11:50:45 +05:30
Steve Loughran
5db32b8ced HADOOP-16547. make sure that s3guard prune sets up the FS (#1402). Contributed by Steve Loughran.
Change-Id: Iaf71561cef6c797a3c66fed110faf08da6cac361
2019-09-18 19:22:15 +02:00
Gabor Bota
e97f0f1ed9
HADOOP-16565. Region must be provided when requesting session credentials or SdkClientException will be thrown (#1454). Contributed by Gabor Bota. 2019-09-18 10:51:08 +02:00
Sahil Takiar
55ce454ce4
HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8.
Contributed by Sahil Takiar.

This moves the SSLSocketFactoryEx class from hadoop-azure into hadoop-common
as the DelegatingSSLSocketFactory and binds the S3A connector to it so that
it can avoid using those HTTPS algorithms which are underperformant on Java 8.

Change-Id: Ie9e6ac24deac1aa05e136e08899620efa7d22abd
2019-09-17 11:32:03 +01:00
Gabor Bota
1505d3f5ff
HADOOP-16566. S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch (#1433). Contributed by Gabor Bota.
Change-Id: Ied43ef1522dfc6a1210d6fc58c38d8208824931b
2019-09-12 19:04:57 +02:00
Gabor Bota
4e273a31f6
HADOOP-16423. S3Guard fsck: Check metadata consistency between S3 and metadatastore (log) (#1208). Contributed by Gabor Bota.
Change-Id: I6bbb331b6c0a41c61043e482b95504fda8a50596
2019-09-12 13:12:46 +02:00
Steve Loughran
9221704f85
HADOOP-16490. Avoid/handle cached 404s during S3A file creation.
Contributed by Steve Loughran.

This patch avoids issuing any HEAD path request when creating a file with overwrite=true,
so 404s will not end up in the S3 load balancers unless someone calls getFileStatus/exists/isFile
in their own code.

The Hadoop FsShell CommandWithDestination class is modified to not register uncreated files
for deleteOnExit(), because that calls exists() and so can place the 404 in the cache, even
after S3A is patched to not do it itself.

Because S3Guard knows when a file should be present, it adds a special FileNotFound retry policy
independently configurable from other retry policies; it is also exponential, but with
different parameters. This is because every HEAD request will refresh any 404 cached in
the S3 Load Balancers. It's not enough to retry: we have to have a suitable gap between
attempts to (hopefully) ensure any cached entry will be gone.

The options and values are:

fs.s3a.s3guard.consistency.retry.interval: 2s
fs.s3a.s3guard.consistency.retry.limit: 7
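
A sketch of overriding them (property names from this change; the values are
arbitrary examples):

  import org.apache.hadoop.conf.Configuration;

  public class ConsistencyRetryTuning {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // wait longer between attempts so any cached 404 has time to expire
      conf.set("fs.s3a.s3guard.consistency.retry.interval", "5s");
      conf.setInt("fs.s3a.s3guard.consistency.retry.limit", 9);
    }
  }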

The S3A copy() method used during rename() raises a RemoteFileChangedException which is not caught,
so it is not downgraded to a false return value. Thus: when a rename is unrecoverable, this fact is propagated.

Copy operations without S3Guard lack the confidence that the file exists, so don't retry the same way:
it will fail fast with a different error message. However, because create(path, overwrite=false) no
longer does HEAD path, we can at least be confident that S3A itself is not creating those cached
404 markers.

Change-Id: Ia7807faad8b9a8546836cb19f816cccf17cca26d
2019-09-11 16:46:25 +01:00
Xieming Li
dc9abd27d9
HADOOP-16554. mvn javadoc:javadoc fails in hadoop-aws.
Contributed by  Xieming Li.

Change-Id: I78e88b5b1ae4702446d2bdd3e2faa3e10b45aef0
2019-09-10 15:05:20 +01:00