S3A input stream support for the fs.option.openfile settings.
As well as supporting the read policy option and values,
if the file length is declared in fs.option.openfile.length
then no HEAD request will be issued when opening a file.
This can cut a few tens of milliseconds off the operation.
The patch adds a new openfile parameter/FS configuration option
fs.s3a.input.async.drain.threshold (default: 16000).
It declares the number of bytes remaining in the http input stream
above which any operation to read and discard the rest of the stream,
"draining", is executed asynchronously.
This asynchronous draining offers some performance benefit on seek-heavy
file IO.
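A minimal sketch of reading a file with these options through the openFile()
builder; "fs", "path" and "status" are assumed to be an S3A filesystem, a path
and a FileStatus obtained earlier (e.g. from a listing):

  // declare the read policy and the known length, so no HEAD request is issued
  FSDataInputStream in = fs.openFile(path)
      .opt("fs.option.openfile.read.policy", "random")
      .opt("fs.option.openfile.length", Long.toString(status.getLen()))
      .build()
      .get();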
Contributed by Steve Loughran.
Change-Id: I9b0626bbe635e9fd97ac0f463f5e7167e0111e39
Adds the option fs.s3a.requester.pays.enabled, which, if set to true, allows
the client to access S3 buckets where the requester is billed for the IO.
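A sketch of enabling it, either globally or for a single bucket (the bucket
name below is illustrative):

  fs.s3a.requester.pays.enabled = true
  fs.s3a.bucket.example-bucket.requester.pays.enabled = true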
Contributed by Daniel Carl Jones
Optimize the scan for s3 by performing a deep tree listing,
inferring directory counts from the paths returned.
Contributed by Ahmar Suhail.
Change-Id: I26ffa8c6f65fd11c68a88d6e2243b0eac6ffd024
A multi-object delete request with more than 1000 keys is not supported by S3
and fails with a MalformedXML error. Requests are therefore paged so that the
number of keys in a single request stays within the limit. The page size can
be configured using "fs.s3a.bulk.delete.page.size"
Contributed By: Mukund Thakur
Adds a new map type WeakReferenceMap, which stores weak
references to values, and a WeakReferenceThreadMap subclass
to more closely resemble a thread local type, as it is a
map of threadId to value.
Construct it with a factory method and optional callback
for notification on loss and regeneration.
WeakReferenceThreadMap<WrappingAuditSpan> activeSpan =
    new WeakReferenceThreadMap<>(
        (k) -> getUnbondedSpan(),
        this::noteSpanReferenceLost);
This is used in ActiveAuditManagerS3A for span tracking.
Relates to
* HADOOP-17511. Add an Audit plugin point for S3A
* HADOOP-18094. Disable S3A auditing by default.
Contributed by Steve Loughran.
Part of HADOOP-17198. Support S3 Access Points.
HADOOP-18068. "upgrade AWS SDK to 1.12.132" broke the access point endpoint
translation.
Correct endpoints should start with "s3-accesspoint."; after the SDK upgrade they start with
"s3.accesspoint-", which breaks tests and region detection by the SDK.
Contributed by Bogdan Stolojan
See HADOOP-18091. S3A auditing leaks memory through ThreadLocal references
* Adds a new option fs.s3a.audit.enabled to control whether or not auditing
is enabled. This is false by default.
* When false, the S3A auditing manager is NoopAuditManagerS3A,
which was formerly only used for unit tests and
during filesystem initialization.
* When true, ActiveAuditManagerS3A is used for managing auditing,
allowing auditing events to be reported.
* updates documentation and tests.
This patch does not fix the underlying leak. When auditing is enabled,
long-lived threads will retain references to the audit managers
of S3A filesystem instances which have already been closed.
Contributed by Steve Loughran.
Completely removes S3Guard support from the S3A codebase.
If the connector is configured to use any metastore other than
the null and local stores (i.e. DynamoDB is selected) the s3a client
will raise an exception and refuse to initialize.
This is to ensure that there is no mix of S3Guard enabled and disabled
deployments with the same configuration but different hadoop releases
-it must be turned off completely.
The "hadoop s3guard" command has been retained -but the supported
subcommands have been reduced to those which are not purely S3Guard
related: "bucket-info" and "uploads".
This is a major change in terms of the number of files
changed; before cherry picking subsequent s3a patches into
older releases, this patch will probably need backporting
first.
Goodbye S3Guard, your work is done. Time to die.
Contributed by Steve Loughran.
This switches the default behavior of S3A output streams
to logging a warning when Syncable.hsync() or hflush() is
called; it's not considered an error unless the defaults
are overridden.
This avoids breaking applications which call the APIs,
at the risk of people trying to use S3 as a safe store
of streamed data (HBase WALs, audit logs etc).
Contributed by Steve Loughran.
The ordering of the resolution of new and deprecated s3a encryption options & secrets is the same when JCEKS and other hadoop credentials stores are used to store them as
when they are in XML files: per-bucket settings always take priority over global values,
even when the bucket-level options use the old option names.
Contributed by Mehakmeet Singh and Steve Loughran
The option fs.s3a.object.content.encoding declares the content encoding to be set on files when they are written; this is served up in the "Content-Encoding" HTTP header when reading objects back in.
This is useful for people loading the data into other tools in the AWS ecosystem which don't use file extensions to infer compression type (e.g. serving compressed files from S3 or importing into RDS)
Contributed by: Holden Karau
Add support for S3 Access Points. This provides extra security as it
ensures applications are not working with buckets belonging to third parties.
To bind a bucket to an access point, set the access point (ap) ARN,
which must be done for each specific bucket, using the pattern
fs.s3a.bucket.$BUCKET.accesspoint.arn = ARN
* The global/bucket option `fs.s3a.accesspoint.required` can be set to
mandate that buckets declare their access point.
* This is not compatible with S3Guard.
Consult the documentation for further details.
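A sketch of the binding; the bucket name and access point ARN below are
made-up examples:

  fs.s3a.bucket.example-bucket.accesspoint.arn = arn:aws:s3:eu-west-1:123456789012:accesspoint/example-ap
  fs.s3a.accesspoint.required = true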
Contributed by Bogdan Stolojan
This migrates the fs.s3a-server-side encryption configuration options
to a name which covers client-side encryption too.
fs.s3a.server-side-encryption-algorithm becomes fs.s3a.encryption.algorithm
fs.s3a.server-side-encryption.key becomes fs.s3a.encryption.key
The existing keys remain valid, simply deprecated and remapped
to the new values. If you want server-side encryption options
to be picked up regardless of hadoop versions, use
the old keys.
(the old key also works for CSE, though as no version of Hadoop
with CSE support has shipped without this remapping, it's less
relevant)
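A sketch of the new names used with SSE-KMS; the key ARN is a placeholder:

  fs.s3a.encryption.algorithm = SSE-KMS
  fs.s3a.encryption.key = arn:aws:kms:eu-west-1:123456789012:key/EXAMPLE-KEY-ID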
Contributed by: Mehakmeet Singh
* CredentialProviderFactory to detect and report on recursion.
* S3AFS to remove incompatible providers.
* Integration Test for this.
Contributed by Steve Loughran.
Follow on to
* HADOOP-13887. Encrypt S3A data client-side with AWS SDK (S3-CSE)
* HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard enabled
If the S3A bucket is set up to use S3-CSE encryption, all tests which turn
on S3Guard are skipped, so they don't raise any exceptions about
incompatible configurations.
Contributed by: Mehakmeet Singh
Fixes the regression caused by HADOOP-17511 by moving where the
option fs.s3a.acl.default is read -doing it before the RequestFactory
is created.
Adds
* A unit test in TestRequestFactory to verify the ACLs are set
on all file write operations.
* A new ITestS3ACannedACLs test which verifies that ACLs really
do get all the way through.
* S3A Assumed Role delegation tokens to include the IAM permission
s3:PutObjectAcl in the generated role.
Contributed by Steve Loughran
This patch cuts down the size of directory trees used for
distcp contract tests against object stores, so making
them much faster against distant/slow stores.
On abfs, the test only runs with -Dscale (as was the case for s3a already),
and has the larger scale test timeout.
After every test case, the FileSystem IOStatistics are logged,
to provide information about what IO is taking place and
what its performance is.
There are some test cases which upload files of 1+ MiB; you can
increase the size of the upload in the option
"scale.test.distcp.file.size.kb"
Set it to zero and the large file tests are skipped.
Contributed by Steve Loughran.
This improves error handling after multiple failures reading data
-when the read fails and attempts to reconnect() also fail.
Contributed by Bobby Wang.
This work
* Defines the behavior of FileSystem.copyFromLocal in filesystem.md
* Implements a high performance implementation of copyFromLocalOperation
for S3
* Adds a contract test for the operation: AbstractContractCopyFromLocalTest
* Implements the contract tests for Local and S3A FileSystems
Contributed by: Bogdan Stolojan
This (big!) patch adds support for client side encryption in AWS S3,
with keys managed by AWS-KMS.
Read the documentation in encryption.md very, very carefully before
use and consider it unstable.
S3-CSE is enabled in the existing configuration option
"fs.s3a.server-side-encryption-algorithm":
fs.s3a.server-side-encryption-algorithm=CSE-KMS
fs.s3a.server-side-encryption.key=<KMS_KEY_ID>
You cannot enable CSE and SSE in the same client, although
you can still enable a default SSE option in the S3 console.
* Filesystem list/get status operations subtract 16 bytes from the length
of all files >= 16 bytes long to compensate for the padding which CSE
adds.
* The SDK always warns that the specific algorithm chosen is
deprecated. That algorithm is required for ranged
GET requests (i.e. random IO) to work, so ignore the warning.
* Unencrypted files CANNOT BE READ.
The entire bucket SHOULD be encrypted with S3-CSE.
* Uploading files may be a bit slower as blocks are now
written sequentially.
* The Multipart Upload API is disabled when S3-CSE is active.
Contributed by Mehakmeet Singh
Some network exceptions can raise SdkClientException with message
`Data read has a different length than the expected`.
These should be recoverable.
Contributed by Bogdan Stolojan
This addresses the regression in Hadoop 3.3.1 where if no S3 endpoint
is set in fs.s3a.endpoint, S3A filesystem creation may fail on
non-EC2 deployments, depending on the local host environment setup.
* If fs.s3a.endpoint is empty/null, and fs.s3a.endpoint.region
is null, the region is set to "us-east-1".
* If fs.s3a.endpoint.region is explicitly set to "" then the client
falls back to the SDK region resolution chain; this works on EC2
* Details in troubleshooting.md, including a workaround for Hadoop-3.3.1+
* Also contains some minor restructuring of troubleshooting.md
Contributed by Steve Loughran.
The S3A connector supports
"an auditor", a plugin which is invoked
at the start of every filesystem API call,
and whose issued "audit span" provides a context
for all REST operations against the S3 object store.
The standard auditor sets the HTTP Referrer header
on the requests with information about the API call,
such as process ID, operation name, path,
and even job ID.
If the S3 bucket is configured to log requests, this
information will be preserved there and so can be used
to analyze and troubleshoot storage IO.
Contributed by Steve Loughran.
The option `fs.s3a.endpoint.region` can be used
to explicitly set the AWS region of a bucket.
This is needed when using AWS Private Link, as
the region cannot be automatically determined.
Contributed by Mehakmeet Singh
When the S3A and ABFS filesystems are closed,
their IOStatistics are logged at debug in the log:
org.apache.hadoop.fs.statistics.IOStatisticsLogging
Set `fs.iostatistics.logging.level` to `info` for the statistics
to be logged at info. (also: `warn` or `error` for even higher
log levels).
Contributed by: Mehakmeet Singh
Followup to HADOOP-13327, which changed S3A output stream hsync/hflush calls
to raise an exception.
Adds a new option fs.s3a.downgrade.syncable.exceptions.
When true, calls to Syncable hsync/hflush on S3A output streams will
log once at warn (for the entire process life, not just the stream), then
increment IOStats with the relevant operation counter.
With the downgrade option false (default)
* IOStats are incremented
* The UnsupportedOperationException currently raised includes a link to the
JIRA.
Contributed by Steve Loughran.
Moves to the builder API for AWS S3 client creation, and
offers a similar style of API to the S3A FileSystem and tests, hiding
the details of which options are client, which are in AWS Conf,
and doing the wiring up of S3A statistics interfaces to the AWS
SDK internals. S3A Statistics, including IOStatistics, should now
count throttling events handled in the AWS SDK itself.
This patch restores endpoint determination by probes to US-East-1
if the client isn't configured with fs.s3a.endpoint.
Explicitly setting the endpoint will save the cost of these probe
HTTP requests.
Contributed by Steve Loughran.
The S3A connector's rename() operation now raises FileNotFoundException if
the source doesn't exist, and FileAlreadyExistsException if the destination
exists and is unsuitable for the source file/directory.
When renaming to a path which does not exist, the connector no longer checks
for the destination parent directory existing -instead it simply verifies
that there is no file immediately above the destination path.
This is needed to avoid race conditions with delete() and rename()
calls working on adjacent subdirectories.
Contributed by Steve Loughran.
Removed findbugs from the hadoop build images and added spotbugs instead.
Upgraded SpotBugs to 4.2.2 and spotbugs-maven-plugin to 4.2.0.
Reviewed-by: Masatake Iwasaki <iwasakims@apache.org>
Use spotbugs instead of findbugs. Removed findbugs from the hadoop build images,
and added spotbugs in the images instead.
Reviewed-by: Masatake Iwasaki <iwasakims@apache.org>
Reviewed-by: Inigo Goiri <inigoiri@apache.org>
Reviewed-by: Dinesh Chitlangia <dineshc@apache.org>
Adds an Abortable.abort() interface for streams to enable output streams to be terminated; this
is implemented by the S3A connector's output stream. It allows for commit protocols
to be implemented which commit/abort work by writing to the final destination and
using the abort() call to cancel any write which is not intended to be committed.
Consult the specification document for information about the interface and its use.
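A sketch of how an application might use it; the stream capability string is
assumed here, and the fallback path simply closes the stream on stores
without abort support:

  FSDataOutputStream out = fs.create(path);
  out.write("work in progress".getBytes(StandardCharsets.UTF_8));
  if (out.hasCapability("fs.capability.outputstream.abortable")) {
    out.abort();   // cancel the write; nothing becomes visible at the destination
  } else {
    out.close();   // no abort support: fall back to close() (the data becomes visible)
  }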
Contributed by Jungtaek Lim and Steve Loughran.
This defines what output streams and especially those which implement
Syncable are meant to do, and documents where implementations (HDFS; S3)
don't. With tests.
The file:// FileSystem now supports Syncable if an application calls
FileSystem.setWriteChecksum(false) before creating a file -checksumming
and Syncable.hsync() are incompatible.
Contributed by Steve Loughran.
* core-default.xml updated so that fs.s3a.committer.magic.enabled = true
* CommitConstants updated to match
* All tests which previously enabled the magic committer now rely on
default settings. This helps make sure it is enabled.
* Docs cover the switch, mention it's enabled and explain why you may
want to disable it.
Note: this doesn't switch to using the committer -it just enables the path
rewriting magic which it depends on.
Contributed by Steve Loughran.
Caused by HADOOP-16830 and HADOOP-17271.
Fixes tests which fail intermittently based on configs and
in the case of the HugeFile tests, bulk runs with existing
FS instances meant statistic probes sometimes ended up probing those
of a previous FS.
Contributed by Steve Loughran.
Change-Id: I65ba3f44444e59d298df25ac5c8dc5a8781dfb7d
Caused by HADOOP-16380 and HADOOP-17271.
Fixes tests which fail intermittently based on configs and
in the case of the HugeFile tests, bulk runs with existing
FS instances meant statistic probes sometimes ended up probing those
of a previous FS.
Contributed by Steve Loughran.
Also fixes HADOOP-16995. ITestS3AConfiguration proxy tests failures when bucket probes == 0
The improvement should include the fix, because the test would fail by default otherwise.
Change-Id: I9a7e4b5e6d4391ebba096c15e84461c038a2ec59
S3A connector to support the IOStatistics API of HADOOP-16830,
This is a major rework of the S3A Statistics collection to
* Embrace the IOStatistics APIs
* Move from direct references of S3AInstrumentation statistics
collectors to interface/implementation classes in new packages.
* Ubiquitous support of IOStatistics, including:
S3AFileSystem, input and output streams, RemoteIterator instances
provided in list calls.
* Adoption of new statistic names from hadoop-common
Regarding statistic collection, as well as all existing
statistics, the connector now records min/max/mean durations
of HTTP GET and HEAD requests, and those of LIST operations.
Contributed by Steve Loughran.
The addition of deprecated S3A configuration options in HADOOP-17318
triggered a reload of default (xml resource) configurations, which breaks
tests which fail if there's a per-bucket setting inconsistent with test
setup.
Creating an S3AFS instance before creating the Configuration() instance
for test runs gets that reload out the way before test setup takes
place.
Along with the fix, extra changes in the failing test suite to fail
fast when marker policy isn't as expected, and to log FS state better.
Rather than create and discard an instance, add a new static method
to S3AFS and invoke it in test setup. This forces the load.
Change-Id: Id52b1c46912c6fedd2ae270e2b1eb2222a360329
This adds a semaphore to throttle the number of FileSystem instances which
can be created simultaneously, set in "fs.creation.parallel.count".
This is designed to reduce the impact of many threads in an application calling
FileSystem.get() on a filesystem which takes time to instantiate -for example
to an object store where HTTPS connections are set up during initialization.
Many threads trying to do this may create spurious delays by conflicting
for access to synchronized blocks, when simply limiting the parallelism
diminishes the conflict, so speeds up all threads trying to access
the store.
The default value, 64, is larger than is likely to deliver any speedup -but
it does mean that there should be no adverse effects from the change.
If a service appears to be blocking on all threads initializing connections to
abfs, s3a or another store, try a smaller (possibly significantly smaller) value.
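For example, to limit a service to four simultaneous filesystem
instantiations (the value is purely illustrative):

  fs.creation.parallel.count = 4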
Contributed by Steve Loughran.
This patch
* fixes the inversion
* adds a precondition check
* if the commands are supplied inverted, swaps them with a warning.
This is to stop breaking any tests written to cope with the existing
behavior.
Contributed by Steve Loughran
See also [SPARK-33402]: Jobs launched in same second have duplicate MapReduce JobIDs
Contributed by Steve Loughran.
Change-Id: Iae65333cddc84692997aae5d902ad8765b45772a
This fixes the S3Guard/Directory Marker Retention integration so that when
fs.s3a.directory.marker.retention=keep, failures during multipart delete
are handled correctly, as are incremental deletes during
directory tree operations.
In both cases, when a directory marker with children is deleted from
S3, the directory entry in S3Guard is not deleted, because it is still
critical to representing the structure of the store.
Contributed by Steve Loughran.
Change-Id: I4ca133a23ea582cd42ec35dbf2dc85b286297d2f
Unless you explicitly set it, the issue date of a delegation token identifier is 0, which confuses spark renewal (SPARK-33440). This patch makes sure that all S3A DT identifiers have the current time as issue date, fixing the problem as far as S3A tokens are concerned.
Contributed by Jungtaek Lim.
This reverts changes in HADOOP-13230 to use S3Guard TTL in choosing when
to issue a HEAD request; fixing tests to compensate.
New org.apache.hadoop.fs.s3a.performance.OperationCost cost,
S3GUARD_NONAUTH_FILE_STATUS_PROBE for use in cost tests.
Contributed by Steve Loughran.
Change-Id: I418d55d2d2562a48b2a14ec7dee369db49b4e29e
S3AFileSystem.listStatus() is optimized for invocations
where the path supplied is a non-empty directory.
The number of S3 requests is significantly reduced, saving
time, money, and reducing the risk of S3 throttling.
Contributed by Mukund Thakur.
This changes directory tree deletion so that only files are incrementally deleted
from S3Guard after the objects are deleted; the directories are left alone
until metadataStore.deleteSubtree(path) is invoked.
This avoids directory tombstones being added above files/child directories,
which stop the treewalk and delete phase from working.
Also:
* Callback to delete objects splits files and dirs so that
any problems deleting the dirs don't trigger s3guard updates.
* New statistic to measure #of objects deleted, alongside request count.
* Callback listFilesAndEmptyDirectories renamed listFilesAndDirectoryMarkers
to clarify behavior.
* Test enhancements to replicate the failure and verify the fix
Contributed by Steve Loughran
Now skips ITestS3AEncryptionWithDefaultS3Settings.testEncryptionOverRename
when server side encryption is not set to sse:kms
Contributed by Mukund Thakur
This adds an option to disable "empty directory" marker deletion,
to avoid throttling and other scale problems.
This feature is *not* backwards compatible.
Consult the documentation and use with care.
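A sketch of the setting, using the option named in the marker-retention entry
elsewhere in these notes:

  fs.s3a.directory.marker.retention = keep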
Contributed by Steve Loughran.
Change-Id: I69a61e7584dc36e485d5e39ff25b1e3e559a1958
Contributed by Steve Loughran.
Fixes a condition which can cause job commit to fail if a task was
aborted < 60s before the job commit commenced: the task abort
will shut down the thread pool with a hard exit after 60s; the
job commit POST requests would be scheduled through the same pool,
so be interrupted and fail. At present the access is synchronized,
but presumably the executor shutdown code is calling wait() and releasing
locks.
Task abort is triggered from the AM when task attempts succeed but
there are still active speculative task attempts running. Thus it
only surfaces when speculation is enabled and the final tasks are
speculating, which, given they are the stragglers, is not unheard of.
Note: this problem has never been seen in production; it has surfaced
in the hadoop-aws tests on a heavily overloaded desktop
Contributed by Steve Loughran.
S3A delegation token providers will be asked for any additional
token issuers; an array can be returned, and
each one will be asked for tokens when DelegationTokenIssuer collects
all the tokens for a filesystem.
Contributed by Mukund Thakur and Steve Loughran.
This patch ensures that writes to S3A fail when more than 10,000 blocks are
written. That upper bound still exists. To write massive files, make sure
that the value of fs.s3a.multipart.size is set to a size which is large
enough to upload the files in fewer than 10,000 blocks.
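As a rough worked example (sizes illustrative): with fs.s3a.multipart.size set
to 128M, the largest file which can be uploaded is 128 MB * 10,000 = roughly
1.25 TB; a 512M part size raises that ceiling to about 5 TB.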
Change-Id: Icec604e2a357ffd38d7ae7bc3f887ff55f2d721a
Contributed by Steve Loughran.
The S3Guard absence warning of HADOOP-16484 has been changed
so that by default the S3A connector only logs at debug
when the connection to the S3 Store does not have S3Guard
enabled.
The option to control this log level is now
fs.s3a.s3guard.disabled.warn.level
and can be one of: silent, inform, warn, fail.
On a failure, an ExitException is raised with exit code 49.
For details on this safety feature, consult the s3guard documentation.
HADOOP-16986. S3A to not need wildfly JAR on its classpath.
Contributed by Steve Loughran
This is a successor to HADOOP-16346, which enabled the S3A connector
to load the native openssl SSL libraries for better HTTPS performance.
That patch required wildfly.jar to be on the classpath. This
update:
* Makes wildfly.jar optional except in the special case that
"fs.s3a.ssl.channel.mode" is set to "openssl"
* Retains the declaration of wildfly.jar as a compile-time
dependency in the hadoop-aws POM. This means that unless
explicitly excluded, applications importing that published
maven artifact will, transitively, add the specified
wildfly JAR into their classpath for compilation/testing/
distribution.
This is done for packaging and to offer that optional
speedup. It is not mandatory: applications importing
the hadoop-aws POM can exclude it if they choose.
Contributed by Mukund Thakur.
If you set the log org.apache.hadoop.fs.s3a.impl.NetworkBinding
to DEBUG, then when the S3A bucket probe is made -the DNS address
of the S3 endpoint is calculated and printed.
This is useful to see if a large set of processes are all using
the same IP address from the pool of load balancers to which AWS
directs clients when an AWS S3 endpoint is resolved.
This can have implications for performance: if all clients
access the same load balancer performance may be suboptimal.
Note: if bucket probes are disabled, fs.s3a.bucket.probe = 0,
the DNS logging does not take place.
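A sketch of the corresponding log4j.properties entry, assuming the standard
Hadoop log4j setup:

  log4j.logger.org.apache.hadoop.fs.s3a.impl.NetworkBinding=DEBUG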
Change-Id: I21b3ac429dc0b543f03e357fdeb94c2d2a328dd8
add unit test, new ITest and then fix the issue: different schema, bucket == skip
factored out the underlying logic for unit testing; also moved
maybeAddTrailingSlash to S3AUtils (while retaining/forwarding the existing method
in S3AFS).
tested: london, sole failure is
testListingDelete[auth=true](org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations)
filed HADOOP-16853
Change-Id: I4b8d0024469551eda0ec70b4968cba4abed405ed
Contributed by Ben Roling.
ETag values are unpredictable with some S3 encryption algorithms.
Skip ITestS3AMiscOperations tests which make assertions about etags
when default encryption on a bucket is enabled.
When testing with an AWS account which lacks the privilege
for a call to getBucketEncryption(), we don't skip the tests.
In the event of failure, developers get to expand the
permissions of the account or relax default encryption settings.
Contributed by Steve Loughran
* move qualify logic to S3AFileSystem.makeQualified()
* make S3AFileSystem.qualify() a private redirect to that
* ITestS3GuardFsShell turned off
Contributed by Steve Loughran.
Not all stores do complete validation here; in particular the S3A
Connector does not: walking up the entire directory tree to check whether
any parent of a path is a file significantly slows things down.
This check does take place in S3A mkdirs(), which walks backwards up the list of
parent paths until it finds a directory (success) or a file (failure).
In practice production applications invariably create destination directories
before writing 1+ file into them -restricting the check purely to the mkdirs()
call delivers a significant speedup while implicitly including the checks.
Change-Id: I2c9df748e92b5655232e7d888d896f1868806eb0
Contributed by Mukund Thakur.
This addresses an issue which surfaced with KMS encryption: the wrong
KMS key could be picked up in the S3 COPY operation, so
renamed files, while encrypted, would end up with the
bucket default key.
As well as adding tests in the new suite
ITestS3AEncryptionWithDefaultS3Settings,
AbstractSTestS3AHugeFiles has a new test method to
verify that the encryption settings also work
for large files copied via multipart operations.
Contributed by Sergei Poganshev.
Catches Exception instead of IOException in closeStream()
and so handle exceptions such as SdkClientException by
aborting the wrapped stream. This will increase resilience
to failures, as any which occur during stream closure
will be caught. Furthermore, because the
underlying HTTP connection is aborted, rather than closed,
it will not be recycled to cause problems on subsequent
operations.
This adds a new option fs.s3a.bucket.probe, range (0-2) to
control which probe for a bucket existence to perform on startup.
0: no checks
1: v1 check (as has been performed until now)
2: v2 bucket check, which also includes a permission check. Default.
When set to 0, bucket existence checks won't be done
during initialization thus making it faster.
When the bucket is not available in S3,
or if fs.s3a.endpoint points to the wrong instance of a private S3 store
consecutive calls like listing, read, write etc. will fail with
an UnknownStoreException.
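For example, to skip the probe entirely at startup:

  fs.s3a.bucket.probe = 0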
Contributed by:
* Mukund Thakur (main patch and tests)
* Rajesh Balamohan (v0 list and performance tests)
* lqjacklee (HADOOP-15990/v2 list)
* Steve Loughran (UnknownStoreException support)
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
new file: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/UnknownStoreException.java
new file: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md
modified: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
new file: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/MockS3ClientFactory.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java
modified: hadoop-tools/hadoop-aws/src/test/resources/core-site.xml
Change-Id: Ic174f803e655af172d81c1274ed92b51bdceb384
Contributed by Steve Loughran.
During S3A rename() and delete() calls, the list of objects to delete is
built up into batches of a thousand and then POSTed in a single large
DeleteObjects request.
But as the IO capacity allowed on an S3 partition may only be 3500 writes
per second *and* each entry in that POST counts as a single write, then
one of those posts alone can trigger throttling on an already loaded
S3 directory tree. Which can trigger backoff and retry, with the same
thousand entry post, and so recreate the exact same problem.
Fixes
* Page size for delete object requests is set in
fs.s3a.bulk.delete.page.size; the default is 250.
* The property fs.s3a.experimental.aws.s3.throttling (default=true)
can be set to false to disable throttle retry logic in the AWS
client SDK -it is all handled in the S3A client. This
gives more visibility in to when operations are being throttled
* Bulk delete throttling events are logged to the log
org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears
often then choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete
requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate" can track throttling
load over time.
* DynamoDB metastore throttle resilience issues have also been
identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling
flag does not apply to DDB IO precisely because there may still be
lurking issues there and it is safest to rely on the DynamoDB client
SDK.
Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84
Contributed by Steve Loughran.
This fixes two problems with S3Guard authoritative mode and
the auth directory flags which are stored in DynamoDB.
1. mkdirs was creating dir markers without the auth bit,
forcing needless scans on newly created directories and
files subsequently added; it was only with the first listStatus call
on that directory that the dir would be marked as authoritative -even
though it would be complete already.
2. listStatus(path) would reset the authoritative status bit of all
child directories even if they were already marked as authoritative.
Issue #2 is possibly the most expensive, as any treewalk using listStatus
(e.g. globfiles) would clear the auth bit for all child directories before
listing them. And this would happen every single time...
essentially you weren't getting authoritative directory listings.
For the curious, the major bug was actually found during testing
-we'd all missed it during reviews.
A lesson there: the better the tests the fewer the bugs.
Maybe also: something obvious and significant can get by code reviews.
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/BulkOperationState.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/LocalMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/NullMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardWriteBack.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestRestrictedReadAccess.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestPartialDeleteFailures.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreAuthoritativeMode.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardFsck.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
modified: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java
Change-Id: Ic3ffda13f2af2430afedd50fd657b595c83e90a7
Contributed by Mustafa Iman.
This adds a new configuration option fs.s3a.connection.request.timeout
to declare the time out on HTTP requests to the AWS service;
0 means no timeout.
Measured in seconds; the usual time suffixes are all supported.
Important: this is the maximum duration of any AWS service call,
including upload and copy operations. If non-zero, it must be larger
than the time to upload multi-megabyte blocks to S3 from the client,
and to rename many-GB files. Use with care.
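A sketch of setting a bound with one of the time suffixes (the value is only
illustrative):

  fs.s3a.connection.request.timeout = 10m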
Change-Id: I407745341068b702bf8f401fb96450a9f987c51c
* Enhanced builder + FS spec
* s3a FS to use this to skip HEAD on open
* and to use version/etag when opening the file
works with S3AFileStatus and S3ALocatedFileStatus
Introduces `openssl` as an option for `fs.s3a.ssl.channel.mode`.
The new option is documented and marked as experimental.
For details on how to use this, consult the performance document
in the s3a documentation.
This patch is the successor to HADOOP-16050 "S3A SSL connections
should use OpenSSL" -which was reverted because of
incompatibilities between the wildfly OpenSSL client and the AWS
HTTPS servers (HADOOP-16347). With the Wildfly release moved up
to 1.0.7.Final (HADOOP-16405) everything should now work.
Related issues:
* HADOOP-15669. ABFS: Improve HTTPS Performance
* HADOOP-16050: S3A SSL connections should use OpenSSL
* HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8
* HADOOP-16405: Upgrade Wildfly Openssl version to 1.0.7.Final
Contributed by Sahil Takiar
Change-Id: I80a4bc5051519f186b7383b2c1cea140be42444e
Contains:
HADOOP-16474. S3Guard ProgressiveRenameTracker to mark destination
directory as authoritative on success.
HADOOP-16684. S3guard bucket info to list a bit more about
authoritative paths.
HADOOP-16722. S3GuardTool to support FilterFileSystem.
This patch improves the marking of newly created/import directory
trees in S3Guard DynamoDB tables as authoritative.
Specific changes:
* Renamed directories are marked as authoritative if the entire
operation succeeded (HADOOP-16474).
* When updating parent table entries as part of any table write,
there's no overwriting of their authoritative flag.
s3guard import changes:
* new -verbose flag to print out what is going on.
* The "s3guard import" command lets you declare that a directory tree
is to be marked as authoritative
hadoop s3guard import -authoritative -verbose s3a://bucket/path
When importing a listing and a file is found, the import tool queries
the metastore and only updates the entry if the file is different from
before, where different == new timestamp, etag, or length. S3Guard can get
timestamp differences due to clock skew in PUT operations.
As the recursive list performed by the import command doesn't retrieve the
versionID, the existing entry may in fact be more complete.
When updating an existing entry due to clock skew, the existing version ID
is propagated to the new entry (note: the etags must match; this is needed
to deal with inconsistent listings).
There is a new s3guard command to audit a s3guard bucket/path's
authoritative state:
hadoop s3guard authoritative -check-config s3a://bucket/path
This is primarily for testing/auditing.
The s3guard bucket-info command also provides some more details on the
authoritative state of a store (HADOOP-16684).
Change-Id: I58001341c04f6f3597fcb4fcb1581ccefeb77d91