* HDFS-15436. Default mount table name used by ViewFileSystem should be configurable
* Replace Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE use in tests
* Address Uma's comments on PR#2100
* Sort lists in test to match without concern to order
* Address comments, fix checkstyle and fix failing tests
* Fix checkstyle
Improving the router's performance for delegation-token-related operations. It achieves this by removing the router's watchers on tokens: in our experience, the huge number of watches inside ZooKeeper degrades ZooKeeper's performance quite badly. The current limit is about 1.2-1.5 million watches.
HADOOP-16986. S3A to not need wildfly JAR on its classpath.
Contributed by Steve Loughran
This is a successor to HADOOP-16346, which enabled the S3A connector
to load the native openssl SSL libraries for better HTTPS performance.
That patch required wildfly.jar to be on the classpath. This
update:
* Makes wildfly.jar optional except in the special case that
"fs.s3a.ssl.channel.mode" is set to "openssl" (see the sketch below)
* Retains the declaration of wildfly.jar as a compile-time
dependency in the hadoop-aws POM. This means that unless
explicitly excluded, applications importing that published
maven artifact will, transitively, add the specified
wildfly JAR into their classpath for compilation/testing/
distribution.
This is done for packaging and to offer that optional
speedup. It is not mandatory: applications importing
the hadoop-aws POM can exclude it if they choose.
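As an aside (not part of the patch itself), a minimal sketch of opting in to
the one mode that still requires wildfly.jar on the classpath; the property
name and value come from this change, everything else is illustrative:

  import org.apache.hadoop.conf.Configuration;

  // Illustrative only: explicitly opt in to openssl, the one case in which
  // wildfly.jar must remain on the classpath. Leave the property unset and
  // the JAR stays optional.
  static Configuration opensslChannelConf() {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.ssl.channel.mode", "openssl");
    return conf;
  }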
1. Remove superfluous code
2. Remove superfluous comments
3. Checkstyle fixes
4. Remove methods that simply call super.method()
5. Use Java 8 facilities to streamline code where applicable
6. Simplify and unify some of the constructs between the two classes
7. Expand the arrays by 1.5x instead of 2x per expansion (a minimal sketch follows below).
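A minimal sketch of the growth policy in item 7 (the method and array here are
hypothetical, not the classes touched by this change):

  import java.util.Arrays;

  // Hypothetical illustration of 1.5x growth: a few more reallocations
  // than doubling, in exchange for less memory overshoot per expansion.
  static long[] ensureCapacity(long[] buf, int required) {
    if (required <= buf.length) {
      return buf;
    }
    int newLength = Math.max(required, buf.length + (buf.length >> 1));
    return Arrays.copyOf(buf, newLength);
  }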
Contributed by Steve Loughran.
This moves the new API of HDFS-13616 into an interface which is implemented by
the HDFS RPC filesystem client (not WebHDFS or any other connector).
This new interface, BatchListingOperations, is in hadoop-common,
so applications do not need to be compiled with HDFS on the classpath;
they must cast the FS to the interface.
An instanceof check can probe the client for the new interface; the patch
also adds a new path capability to probe for this.
The FileSystem implementation is cut; tests updated as appropriate.
All new interfaces/classes/constants are marked as @unstable.
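A hedged sketch of the probing pattern described above; the capability string
below is a placeholder, since the exact name added by the patch is not quoted
here:

  import java.io.IOException;
  import org.apache.hadoop.fs.BatchListingOperations;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  // Probe the client for the new interface without compiling against HDFS.
  static boolean supportsBatchListing(FileSystem fs, Path path) throws IOException {
    if (fs instanceof BatchListingOperations) {
      // Safe to cast to BatchListingOperations and use its batched listing calls.
      return true;
    }
    // Alternative probe via path capabilities; "fs.capability.batch.listing"
    // is a placeholder string, not necessarily the capability this patch adds.
    return fs.hasPathCapability(path, "fs.capability.batch.listing");
  }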
Change-Id: I5623c51f2c75804f58f915dd7e60cb2cffdac681
Contributed by Steve Loughran.
Not all stores do complete validation here; in particular the S3A
connector does not: checking the entire directory tree to see whether any
parent of the path is a file significantly slows things down.
This check does take place in S3A mkdirs(), which walks backwards up the list of
parent paths until it finds a directory (success) or a file (failure), as
sketched below.
In practice, production applications invariably create destination directories
before writing one or more files into them; restricting the check purely to
the mkdirs() call delivers a significant speedup while implicitly retaining
the checks.
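An illustrative sketch of that parent walk (this is not the actual
S3AFileSystem code, just the shape of the check described above):

  import java.io.FileNotFoundException;
  import java.io.IOException;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.ParentNotDirectoryException;
  import org.apache.hadoop.fs.Path;

  // Walk backwards up the parent paths: a directory means success, a file
  // means failure, and a missing entry means keep walking.
  static void checkParents(FileSystem fs, Path dir) throws IOException {
    for (Path p = dir.getParent(); p != null && !p.isRoot(); p = p.getParent()) {
      try {
        FileStatus st = fs.getFileStatus(p);
        if (st.isDirectory()) {
          return;
        }
        throw new ParentNotDirectoryException(p + " is a file");
      } catch (FileNotFoundException e) {
        // nothing at this path: continue up the tree
      }
    }
  }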
Change-Id: I2c9df748e92b5655232e7d888d896f1868806eb0
Followup to HADOOP-16885: Encryption zone file copy failure leaks a temp file
Moving the delete() call broke a mocking test, which slipped through the review process.
Contributed by Steve Loughran.
Change-Id: Ia13faf0f4fffb1c99ddd616d823e4f4d0b7b0cbb
Contributed by Xiaoyu Yao.
Contains HDFS-14892. Close the output stream if createWrappedOutputStream() fails
Copying a file through the FsShell command into an HDFS encryption zone where
the caller lacks permissions leaks a temp ._COPYING_ file
and potentially leaves a wrapped stream unclosed.
This is a convergence of a fix for S3 meeting an issue in HDFS.
S3: a HEAD against a file can cache a 404,
so you must not do any existence checks, including deleteOnExit(),
until the file is written. Hence HADOOP-16490: only register files for
deletion once the create worked and the upload is not direct.
HDFS-14892: HDFS doesn't close wrapped streams when IOEs are raised on
create() failures, which means that an entry is retained on the NN,
so you need to register a file with deleteOnExit() even if the file wasn't
created.
This patch:
* Moves the deleteOnExit call to ensure the created file gets deleted cleanly.
* Fixes HDFS to close the wrapped stream on failures (a simplified sketch
follows below).
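A simplified sketch of the close-on-failure part of the fix; createRaw() and
wrapWithEncryption() below are hypothetical stand-ins for the real
create()/createWrappedOutputStream() calls:

  import java.io.IOException;
  import java.io.OutputStream;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IOUtils;

  // Simplified illustration, not the actual DFSClient code.
  static OutputStream openWrapped(Path path) throws IOException {
    OutputStream underlying = createRaw(path);      // hypothetical create()
    try {
      return wrapWithEncryption(underlying);        // hypothetical wrapper
    } catch (IOException e) {
      // Close the underlying stream on failure so that no half-created
      // entry is left open against the NameNode.
      IOUtils.closeStream(underlying);
      throw e;
    }
  }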
Followup to the main openFile().withStatus() patch.
It turns out that this broke the Hive builds, which
was not well appreciated.
This patch lists places to review in the Hadoop codebase,
and external projects where changes are likely to cause problems.
Contributed by Steve Loughran
Change-Id: Ifac815c65b74d083cd277764b780ac2b5b0f6b36
Contributed by Steve Loughran.
During S3A rename() and delete() calls, the list of objects to delete is
built up into batches of a thousand and then POSTed in a single large
DeleteObjects request.
But as the IO capacity allowed on an S3 partition may be only 3500 writes
per second *and* each entry in that POST counts as a single write, one
of those POSTs alone can trigger throttling on an already loaded
S3 directory tree. That can then trigger backoff and retry, with the same
thousand-entry POST, and so recreate the exact same problem.
Fixes
* Page size for delete object requests is set in
fs.s3a.bulk.delete.page.size; the default is 250.
* The property fs.s3a.experimental.aws.s3.throttling (default=true)
can be set to false to disable the throttle retry logic in the AWS
client SDK -it is all handled in the S3A client. This
gives more visibility into when operations are being throttled
(see the configuration sketch after this list).
* Bulk delete throttling events are logged to the
org.apache.hadoop.fs.s3a.throttled log at INFO; if this appears
often, choose a smaller page size.
* The metric "store_io_throttled" adds the entire count of delete
requests when a single DeleteObjects request is throttled.
* A new quantile, "store_io_throttle_rate", can track throttling
load over time.
* DynamoDB metastore throttle resilience issues have also been
identified and fixed. Note: the fs.s3a.experimental.aws.s3.throttling
flag does not apply to DDB IO precisely because there may still be
lurking issues there and it is safest to rely on the DynamoDB client
SDK.
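As referenced above, a minimal sketch that sets the two properties introduced
here; the values are illustrative, not tuning advice:

  import org.apache.hadoop.conf.Configuration;

  // Illustrative values only.
  static Configuration bulkDeleteTuning() {
    Configuration conf = new Configuration();
    // shrink the DeleteObjects page size below the default of 250
    conf.setInt("fs.s3a.bulk.delete.page.size", 100);
    // leave throttle retries to the S3A client rather than the AWS SDK
    conf.setBoolean("fs.s3a.experimental.aws.s3.throttling", false);
    return conf;
  }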
Change-Id: I00f85cdd94fc008864d060533f6bd4870263fd84
This is a regression caused by HADOOP-16759.
The test TestHarFileSystem uses introspection to verify that HarFileSystem
does not implement methods for which there is a suitable implementation in
the base FileSystem class. Because of the way it checks this, refactoring
(protected) FileSystem methods in an IDE does not automatically change
the probes in TestHarFileSystem.
The changes in HADOOP-16759 did exactly that, and somehow managed
to get through the build/test process without this being noticed.
This patch fixes that failure.
Caused by and fixed by Steve Loughran.
Change-Id: If60d9c97058242871c02ad1addd424478f84f446
Signed-off-by: Mingliang Liu <liuml07@apache.org>