This changes directory tree deletion so that only files are incrementally deleted
from S3Guard after the objects are deleted; the directories are left alone
until metadataStore.deleteSubtree(path) is invoked.
This avoids directory tombstones being added above files/child directories,
which stop the treewalk and delete phase from working.
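A rough sketch of the resulting ordering (listFilePagesUnder() and deleteObjectsPage()
are hypothetical helpers, and the MetadataStore call signatures are simplified; only
deleteSubtree(path) is taken from the text above):

    // Hypothetical sketch of the ordering; helper names are illustrative,
    // not real S3A internals, and MetadataStore signatures are simplified.
    for (List<Path> page : listFilePagesUnder(treeRoot)) {
      deleteObjectsPage(page);          // bulk delete of the S3 objects
      for (Path file : page) {
        metadataStore.delete(file);     // incrementally remove file entries only
      }
    }
    // directory entries/tombstones are only touched once the whole tree is deleted
    metadataStore.deleteSubtree(treeRoot);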
Also:
* Callback to delete objects splits files and dirs so that
any problems deleting the dirs don't trigger S3Guard updates
* New statistic to measure the number of objects deleted, alongside the request count.
* Callback listFilesAndEmptyDirectories renamed listFilesAndDirectoryMarkers
to clarify behavior.
* Test enhancements to replicate the failure and verify the fix
Contributed by Steve Loughran
Change-Id: I0e6ea2c35e487267033b1664228c8837279a35c7
Contributed by Steve Loughran.
* Fixes AbstractContractSeekTest to use readFully
* Doesn't do this to AbstractContractUnbufferTest, as it would change the test too much.
Instead just notes in the error that this may be transient
The issue is that read(buffer) doesn't guarantee that the buffer is filled, only that it will
read some data, which may just be the amount of data left in the current TCP packet.
readFully corrects for this, but using it in the unbuffer test runs the risk that what
is tested for in terms of unbuffering doesn't actually get validated.
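For illustration, a minimal sketch of the difference, assuming an already-initialized
FileSystem fs and Path path:

    // read() may return fewer bytes than the buffer holds (perhaps only the data
    // left in the current TCP packet); readFully() keeps reading until the whole
    // requested range is in, or throws EOFException.
    byte[] buffer = new byte[8192];
    try (FSDataInputStream in = fs.open(path)) {
      int n = in.read(buffer);       // n may be anything from -1 up to buffer.length
      in.readFully(0, buffer);       // positioned read: guarantees the buffer is filled
    }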
Change-Id: I046eadb69b80ba0aac468b354c82c4d510dc3699
This adds an option to disable "empty directory" marker deletion,
to avoid throttling and other scale problems.
This feature is *not* backwards compatible.
Consult the documentation and use with care.
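The commit message doesn't name the option; the setting documented for this feature is
believed to be fs.s3a.directory.marker.retention, so treat the key and value below as an
assumption to check against the S3A documentation:

    // Assumed option name and value -- verify against the S3A docs before use.
    Configuration conf = new Configuration();
    conf.set("fs.s3a.directory.marker.retention", "keep");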
Contributed by Steve Loughran.
Change-Id: I69a61e7584dc36e485d5e39ff25b1e3e559a1958
* Passing C/C++ standard flags such as -std directly is not
cross-compiler friendly, as not all compilers support all values.
* We therefore need to make use of the appropriate flags provided
by CMake in order to specify the C/C++ standards.
Signed-off-by: Akira Ajisaka <aajisaka@apache.org>
(cherry picked from commit 909f1e82d3)
* HDFS-15436. Default mount table name used by ViewFileSystem should be configurable
* Replace Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE use in tests
* Address Uma's comments on PR#2100
* Sort lists in test so they match regardless of order
* Address comments, fix checkstyle and fix failing tests
* Fix checkstyle
(cherry picked from commit bed0a3a374)
Contributed by Steve Loughran.
This is a successor to HADOOP-16346, which enabled the S3A connector
to load the native openssl SSL libraries for better HTTPS performance.
That patch required wildfly.jar to be on the classpath. This
update:
* Makes wildfly.jar optional except in the special case that
"fs.s3a.ssl.channel.mode" is set to "openssl"
* Retains the declaration of wildfly.jar as a compile-time
dependency in the hadoop-aws POM. This means that unless
explicitly excluded, applications importing that published
maven artifact will, transitively, add the specified
wildfly JAR into their classpath for compilation/testing/
distribution.
This is done for packaging and to offer that optional
speedup. It is not mandatory: applications importing
the hadoop-aws POM can exclude it if they choose.
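A minimal sketch of opting in to the OpenSSL path (the property name and value are the
ones quoted above; wildfly.jar must then be on the classpath):

    // Only needed if you want the OpenSSL speedup; with the default setting
    // the connector no longer requires wildfly.jar at all.
    Configuration conf = new Configuration();
    conf.set("fs.s3a.ssl.channel.mode", "openssl");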
Change-Id: I7ed3e5948d1e10ce21276b3508871709347e113d
Contributed by Steve Loughran.
This moves the new API of HDFS-13616 into an interface which is implemented by
the HDFS RPC filesystem client (not WebHDFS or any other connector).
This new interface, BatchListingOperations, is in hadoop-common,
so applications do not need to be compiled with HDFS on the classpath.
They must cast the FS into the interface.
instanceof can be used to probe the client for the new interface -the patch
also adds a new path capability to probe for this.
The FileSystem implementation is cut; tests updated as appropriate.
All new interfaces/classes/constants are marked as @unstable.
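A sketch of the probe described above; the exact path-capability string isn't given
here, so only the instanceof check is shown:

    // Probe for the optional interface before casting; this works without HDFS
    // on the classpath because BatchListingOperations lives in hadoop-common.
    FileSystem fs = path.getFileSystem(conf);
    if (fs instanceof BatchListingOperations) {
      BatchListingOperations batch = (BatchListingOperations) fs;
      // ... invoke the batch listing operations ...
    }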
Change-Id: I5623c51f2c75804f58f915dd7e60cb2cffdac681
Contributed by Steve Loughran.
Not all stores do complete validation here; in particular the S3A
Connector does not: checking up the entire directory tree to see whether
a parent path is a file significantly slows things down.
This check does take place in S3A mkdirs(), which walks backwards up the list of
parent paths until it finds a directory (success) or a file (failure).
In practice production applications invariably create destination directories
before writing 1+ file into them -restricting the check purely to the mkdirs()
call delivers a significant speedup while implicitly including the checks.
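As a sketch of the recommended pattern (bucket and file names here are purely
illustrative):

    // Create the destination directory up front; S3A's mkdirs() walks back up
    // the parent paths, so the file-vs-directory check happens here.
    Path dest = new Path("s3a://example-bucket/output");   // illustrative path
    fs.mkdirs(dest);
    try (FSDataOutputStream out = fs.create(new Path(dest, "part-00000"))) {
      out.writeBytes("...");
    }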
Change-Id: I2c9df748e92b5655232e7d888d896f1868806eb0
Followup to HADOOP-16885: Encryption zone file copy failure leaks a temp file
Moving the delete() call broke a mocking test, which slipped through the review process.
Contributed by Steve Loughran.
Change-Id: Ia13faf0f4fffb1c99ddd616d823e4f4d0b7b0cbb
Contributed by Xiaoyu Yao.
Contains HDFS-14892. Close the output stream if createWrappedOutputStream() fails
Copying a file through the FsShell command into an HDFS encryption zone where
the caller lacks permissions leaks a temp ._COPYING file
and potentially leaves a wrapped stream unclosed.
This is a convergence of a fix for S3 meeting an issue in HDFS.
S3: a HEAD against a file can cache a 404
-you must not do any existence checks, including deleteOnExit(),
until the file is written.
Hence HADOOP-16490: only register files for deletion once the create worked
and the upload is not direct.
HDFS-14892: HDFS doesn't close wrapped streams when IOEs are raised on
create() failures, which means that an entry is retained on the NN
-you need to register a file with deleteOnExit() even if the file wasn't
created.
This patch:
* Moves the deleteOnExit to ensure the created file gets deleted cleanly.
* Fixes HDFS to close the wrapped stream on failures.
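The HDFS side follows the usual close-on-failure pattern; the sketch below shows that
pattern with hypothetical helper names, not the actual DFSClient code:

    // Generic close-on-failure pattern (createRaw()/wrap() are hypothetical):
    OutputStream raw = createRaw();
    try {
      return wrap(raw);                 // may throw IOException
    } catch (IOException e) {
      IOUtils.closeStream(raw);         // make sure the underlying stream is closed
      throw e;
    }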
Followup to the main openFile().withStatus() patch.
It turns out that this broke the hive builds, which
was not well appreciated.
This patch lists places to review in the hadoop codebase,
and external projects where changes are likely to cause problems.
Contributed by Steve Loughran
Change-Id: Ifac815c65b74d083cd277764b780ac2b5b0f6b36
Contributed by Steve Loughran.
During S3A rename() and delete() calls, the list of objects to delete is
built up into batches of a thousand and then POSTed in a single large
DeleteObjects request.
But as the IO capacity allowed on an S3 partition may only be 3500 writes
per second *and* each entry in that POST counts as a single write, then
one of those POSTs alone can trigger throttling on an already loaded
S3 directory tree. That can trigger backoff and retry with the same
thousand-entry POST, and so recreate the exact same problem.
Fixes:
* Page size for delete object requests is set in
fs.s3a.bulk.delete.page.size; the default is 250 (see the configuration
sketch after this list).
* The property fs.s3a.experimental.aws.s3.throttling (default=true)
can be set to false to disable throttle retry logic in the AWS
client SDK -it is all handled in the S3A client. This
gives more visibility into when operations are being throttled.
* Bulk delete throttling events are logged to the
org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears
often, choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete
requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate" can track throttling
load over time.
* DynamoDB metastore throttle resilience issues have also been
identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling
flag does not apply to DDB IO precisely because there may still be
lurking issues there and it is safest to rely on the DynamoDB client
SDK.
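A minimal configuration sketch for the above (the property names and the 250 default come
from this message; the page size of 100 is just an example value):

    // Reduce the bulk delete page size and let the S3A client own throttling retries.
    Configuration conf = new Configuration();
    conf.setInt("fs.s3a.bulk.delete.page.size", 100);                 // default is 250
    conf.setBoolean("fs.s3a.experimental.aws.s3.throttling", false);  // S3 requests only; DDB unaffected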
Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84
This is a regression caused by HADOOP-16759.
The test TestHarFileSystem uses introspection to verify that HarFileSystem
does not implement methods for which there is a suitable implementation in
the base FileSystem class. Because of the way it checks this, refactoring
(protected) FileSystem methods in an IDE does not automatically change
the probes in TestHarFileSystem.
The changes in HADOOP-16759 did exactly that, and somehow managed
to get through the build/test process without this being noticed.
This patch fixes that failure.
Caused by and fixed by Steve Loughran.
Change-Id: If60d9c97058242871c02ad1addd424478f84f446
Signed-off-by: Mingliang Liu <liuml07@apache.org>
Contributed by Mustafa Iman.
This adds a new configuration option fs.s3a.connection.request.timeout
to declare the timeout on HTTP requests to the AWS service;
0 means no timeout.
Measured in seconds; the usual time suffixes are all supported.
Important: this is the maximum duration of any AWS service call,
including upload and copy operations. If non-zero, it must be larger
than the time to upload multi-megabyte blocks to S3 from the client,
and to rename many-GB files. Use with care.
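A minimal sketch (the property name is from the text; the value is illustrative):

    // 0 means no timeout; time suffixes such as "s" are supported.
    Configuration conf = new Configuration();
    conf.set("fs.s3a.connection.request.timeout", "300s");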
Change-Id: I407745341068b702bf8f401fb96450a9f987c51c