Completely removes S3Guard support from the S3A codebase.
If the connector is configured to use any metastore other than
the null and local stores (i.e. if DynamoDB is selected), the S3A client
will raise an exception and refuse to initialize.
This is to ensure that there is no mix of S3Guard-enabled and
S3Guard-disabled deployments sharing the same configuration across
different Hadoop releases: S3Guard must be turned off completely.
The "hadoop s3guard" command has been retained -but the supported
subcommands have been reduced to those which are not purely S3Guard
related: "bucket-info" and "uploads".
This is a major change in terms of the number of files
changed; before cherry picking subsequent s3a patches into
older releases, this patch will probably need backporting
first.
Goodbye S3Guard, your work is done. Time to die.
Contributed by Steve Loughran.
Cuts modtime-based rename recovery, as the object modification time
is not updated during a rename operation.
Applications will have to use the etag API of HADOOP-17979
and implement the recovery themselves.
Why not do the HEAD and etag recovery in the ABFS client?
Because it cuts the IO capacity in half and so kills job commit performance.
The manifest committer of MAPREDUCE-7341 will do this recovery
and act as the reference implementation of the algorithm.
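A rough sketch of the etag check such a recovery could perform, assuming
the EtagSource interface added by HADOOP-17979 (the helper method below is
hypothetical and the class names should be checked against that patch):

  import org.apache.hadoop.fs.EtagSource;
  import org.apache.hadoop.fs.FileStatus;

  /** Hypothetical helper: does the destination entry carry the expected etag? */
  static boolean renameRecovered(FileStatus destStatus, String expectedEtag) {
    return destStatus instanceof EtagSource
        && expectedEtag.equals(((EtagSource) destStatus).getEtag());
  }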
Contributed by: Steve Loughran
Addresses transient failures in the following test classes:
* ITestAbfsStreamStatistics: Uses a filesystem-level static instance to record read/write statistics, which also tracks these operations in other tests running in parallel. Marked for sequential-only runs to avoid transient failures.
* ITestAbfsRestOperationException: The use of a static member to track the retry count causes transient failures when two tests of this class happen to run together. Switched to a non-static variable for assertions on the retry count.
Closes #3341
Contributed by Sumangala Patki
Addresses the problem of processes running out of memory when
there are many ABFS output streams queuing data to upload,
especially when the network upload bandwidth is less than the rate
at which data is generated.
ABFS Output streams now buffer their blocks of data to
"disk", "bytebuffer" or "array", as set in
"fs.azure.data.blocks.buffer"
When buffering via disk, the location for temporary storage
is set in "fs.azure.buffer.dir"
For safe scaling: use "disk" (default); for performance, when
confident that upload bandwidth will never be a bottleneck,
experiment with the memory options.
The number of blocks a single stream can have queued for uploading
is set in "fs.azure.block.upload.active.blocks".
The default value is 20.
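A minimal sketch of setting these options programmatically (the buffer
directory is illustrative; "disk" and 20 are the defaults described above):

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // Buffer queued upload blocks on local disk rather than in JVM memory.
  conf.set("fs.azure.data.blocks.buffer", "disk");
  // Where the temporary block files are written when buffering to disk.
  conf.set("fs.azure.buffer.dir", "/tmp/abfs");
  // Maximum number of blocks a single stream may have queued for upload.
  conf.setInt("fs.azure.block.upload.active.blocks", 20);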
Contributed by Mehakmeet Singh.
This migrates the fs.s3a server-side encryption configuration options
to names which cover client-side encryption too.
fs.s3a.server-side-encryption-algorithm becomes fs.s3a.encryption.algorithm
fs.s3a.server-side-encryption.key becomes fs.s3a.encryption.key
The existing keys remain valid, simply deprecated and remapped
to the new values. If you want server-side encryption options
to be picked up regardless of the Hadoop version, use
the old keys.
(the old key also works for CSE, though as no version of Hadoop
with CSE support has shipped without this remapping, it's less
relevant)
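For example, selecting SSE-KMS through the new names (the key ARN is a
placeholder):

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // The deprecated fs.s3a.server-side-encryption-* keys remap to these.
  conf.set("fs.s3a.encryption.algorithm", "SSE-KMS");
  conf.set("fs.s3a.encryption.key", "arn:aws:kms:eu-west-1:111122223333:key/placeholder");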
Contributed by: Mehakmeet Singh
This patch cuts down the size of directory trees used for
distcp contract tests against object stores, making
them much faster against distant/slow stores.
On abfs, the test only runs with -Dscale (as was the case for s3a already),
and has the larger scale test timeout.
After every test case, the FileSystem IOStatistics are logged,
to provide information about what IO is taking place and
what its performance is.
There are some test cases which upload files of 1+ MiB; you can
increase the size of the upload via the option
"scale.test.distcp.file.size.kb".
Set it to zero and the large file tests are skipped.
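For example, to run the abfs distcp contract tests at scale with a larger
upload (the size, in KiB, is illustrative):

  mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 -Dscale.test.distcp.file.size.kb=10240 verify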
Contributed by Steve Loughran.
Introduces the fs.azure.readahead.range parameter, which can be set by the user.
Data will now be populated in the read-ahead buffer for random reads as well,
which leads to fewer remote calls.
This patch also changes the seek implementation to perform a lazy seek. The
actual seek is done only when a read is initiated and the data is not present in the
buffer; otherwise the data is returned from the buffer, reducing the number of remote storage calls.
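A sketch of tuning the new option (the key is taken from this change; the
value is illustrative and assumed to be a size in bytes):

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // Extra range to prefetch and buffer around reads; assumed to be in bytes.
  conf.setLong("fs.azure.readahead.range", 256 * 1024);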
Contributed By: Mukund Thakur
When the S3A and ABFS filesystems are closed,
their IOStatistics are logged at debug level by the logger
org.apache.hadoop.fs.statistics.IOStatisticsLogging.
Set `fs.iostatistics.logging.level` to `info` for the statistics
to be logged at info. (also: `warn` or `error` for even higher
log levels).
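For example, to get the statistics logged at info when the filesystems are
closed:

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // Log filesystem IOStatistics at info rather than the default debug.
  conf.set("fs.iostatistics.logging.level", "info");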
Contributed by: Mehakmeet Singh
The ABFS Filesystem and its input and output streams now implement
the IOStatisticsSource interface and provide IOStatistics on
their interactions with Azure Storage.
This includes the min/max/mean durations of all REST API calls.
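A sketch of pulling the statistics out of an open stream, assuming the
IOStatistics helper classes shipped in hadoop-common alongside this work
(names as in the 3.3.x line):

  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.statistics.IOStatistics;
  import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsToString;
  import static org.apache.hadoop.fs.statistics.IOStatisticsSupport.retrieveIOStatistics;

  // "in" is an FSDataInputStream opened against an abfs:// path.
  IOStatistics stats = retrieveIOStatistics(in);
  System.out.println(ioStatisticsToString(stats));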
Contributed by Mehakmeet Singh <mehakmeet.singh@cloudera.com>
This moves the mock account name (which is required to never exist) from
"mockAccount" to an account name containing a static UUID.
Contributed by Steve Loughran.
* HADOOP-16948. Support single writer dirs.
* HADOOP-16948. Fix findbugs and checkstyle problems.
* HADOOP-16948. Fix remaining checkstyle problems.
* HADOOP-16948. Add DurationInfo, retry policy for acquiring lease, and javadocs
* HADOOP-16948. Convert ABFS client to use an executor for lease ops
* HADOOP-16948. Fix ABFS lease test for non-HNS
* HADOOP-16948. Fix checkstyle and javadoc
* HADOOP-16948. Address review comments
* HADOOP-16948. Use daemon threads for ABFS lease ops
* HADOOP-16948. Make lease duration configurable
* HADOOP-16948. Add error messages to test assertions
* HADOOP-16948. Remove extra isSingleWriterKey call
* HADOOP-16948. Use only infinite lease duration due to cost of renewal ops
* HADOOP-16948. Remove acquire/renew/release lease methods
* HADOOP-16948. Rename single writer dirs to infinite lease dirs
* HADOOP-16948. Fix checkstyle
* HADOOP-16948. Wait for acquire lease future
* HADOOP-16948. Add unit test for acquire lease failure
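A hedged sketch of enabling the infinite lease dirs feature summarized
above; the key names below are assumptions about this change and should be
verified against the ABFS documentation:

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // Directories whose files are written under an infinite lease (assumed key name).
  conf.set("fs.azure.infinite-lease.directories", "/output,/logs");
  // Threads used for lease operations (assumed key name).
  conf.setInt("fs.azure.lease.threads", 2);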
Removed findbugs from the hadoop build images and added spotbugs instead.
Upgraded SpotBugs to 4.2.2 and spotbugs-maven-plugin to 4.2.0.
Reviewed-by: Masatake Iwasaki <iwasakims@apache.org>
Use spotbugs instead of findbugs. Removed findbugs from the hadoop build images,
and added spotbugs in the images instead.
Reviewed-by: Masatake Iwasaki <iwasakims@apache.org>
Reviewed-by: Inigo Goiri <inigoiri@apache.org>
Reviewed-by: Dinesh Chitlangia <dineshc@apache.org>
This defines what output streams, and especially those which implement
Syncable, are meant to do, and documents where implementations (HDFS; S3)
fall short. With tests.
The file:// FileSystem now supports Syncable if an application calls
FileSystem.setWriteChecksum(false) before creating a file: checksumming
and Syncable.hsync() are incompatible.
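A minimal sketch of that sequence on the local filesystem (the path and
data are illustrative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  FileSystem local = FileSystem.getLocal(new Configuration());
  local.setWriteChecksum(false);   // must be called before create()
  try (FSDataOutputStream out = local.create(new Path("/tmp/syncable.bin"))) {
    out.write(new byte[]{1, 2, 3});
    out.hsync();                   // supported because checksumming is disabled
  }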
Contributed by Steve Loughran.
The ABFS connector now implements listStatusIterator() with
asynchronous prefetching of the next page(s) of results.
For listing large directories this can provide tangible speedups.
If for any reason this needs to be disabled, set
fs.azure.enable.abfslistiterator to false.
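For example, to fall back to the previous synchronous listing behaviour:

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  conf.setBoolean("fs.azure.enable.abfslistiterator", false);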
Contributed by Bilahari T H.
When 403 is returned from an ABFS HTTP call, an AccessDeniedException is raised.
The exception text is unchanged, so any application doing string matching on the getMessage() contents is unaffected.
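A sketch of what a caller sees; the filesystem and path are illustrative,
and the exception is assumed to be java.nio.file.AccessDeniedException:

  import java.nio.file.AccessDeniedException;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.Path;

  try {
    FileStatus st = fs.getFileStatus(new Path("/restricted/data"));
  } catch (AccessDeniedException e) {
    // The store returned HTTP 403; getMessage() still carries the original error text.
  }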
Contributed by Steve Loughran.
DETAILS:
The previous commit for HADOOP-17397 was not the correct fix. DelegationSASGenerator.getDelegationSAS
should return sp=p for the set-permission and set-acl operations. The tests have also been updated as
follows:
1. When saoid and suoid are not specified, skoid must have an RBAC role assignment which grants
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/modifyPermissions/action and sp=p
to set permissions or set ACL.
2. When saoid or suoid is specified, the requirements are the same as in 1), but additionally the saoid or suoid must be an owner of
the file or directory in order for the operation to succeed.
3. When saoid or suoid is specified, the ownership check can be bypassed by also including 'o' (ownership)
in the SAS permission (for example, sp=op). Note that 'o' grants the saoid or suoid the ability to
change the file or directory owner to themselves, and they can also change the owning group. Generally
speaking, if a trusted authorizer would like to give a user the ability to change the permissions or
ACL, then that user should be the file or directory owner.
TEST RESULTS:
namespace.enabled=true
auth.type=SharedKey
-------------------
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
Tests run: 462, Failures: 0, Errors: 0, Skipped: 24
Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
namespace.enabled=true
auth.type=OAuth
-------------------
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
Tests run: 462, Failures: 0, Errors: 0, Skipped: 70
Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
Fixes read-ahead buffer management issues introduced by HADOOP-16852,
"ABFS: Send error back to client for Read Ahead request failure".
Contributed by Sneha Vijayarajan
Contributed by Sneha Vijayarajan
DETAILS:
This change adds config key "fs.azure.enable.conditional.create.overwrite" with
a default of true. When enabled, if create(path, overwrite: true) is invoked
and the file exists, the ABFS driver will first obtain its etag and then attempt
to overwrite the file on the condition that the etag matches. The purpose of this
is to mitigate the non-idempotency of this method. Specifically, in the event of
a network error or similar, the client will retry and this can result in the file
being created more than once, which may result in data loss. In essence this is
like a poor man's file handle, and will be addressed more thoroughly in the future
when support for lease is added to ABFS.
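For example, to opt out and restore the previous unconditional overwrite
behaviour (the default is already true):

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  conf.setBoolean("fs.azure.enable.conditional.create.overwrite", false);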
TEST RESULTS:
namespace.enabled=true
auth.type=SharedKey
-------------------
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 42
Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
namespace.enabled=true
auth.type=OAuth
-------------------
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 74
Tests run: 207, Failures: 0, Errors: 0, Skipped: 140
Adds the options to control the size of the per-output-stream threadpool
when writing data through the ABFS connector:
* fs.azure.write.max.concurrent.requests
* fs.azure.write.max.requests.to.queue
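A sketch of tuning these (the values are illustrative, not recommendations):

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();
  // Threads concurrently uploading blocks for a single output stream.
  conf.setInt("fs.azure.write.max.concurrent.requests", 4);
  // Blocks which may be queued for upload per stream before writes block.
  conf.setInt("fs.azure.write.max.requests.to.queue", 8);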
Contributed by Bilahari T H
Contributed by Thomas Marquardt
DETAILS: WASB depends on the Azure Storage Java SDK. There is a concurrency
bug in the Azure Storage Java SDK that can cause the results of a list blobs
operation to appear empty. This causes the Filesystem listStatus and similar
APIs to return empty results. This has been seen in Spark workloads when jobs
use more than one executor core.
See Azure/azure-storage-java#546 for details on the bug in the Azure Storage SDK.
TESTS: A new test was added to validate the fix. All tests are passing:
wasb:
mvn -T 1C -Dparallel-tests=wasb -Dscale -DtestsThreadCount=8 clean verify
Tests run: 248, Failures: 0, Errors: 0, Skipped: 11
Tests run: 651, Failures: 0, Errors: 0, Skipped: 65
abfs:
mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 64, Failures: 0, Errors: 0, Skipped: 0
Tests run: 437, Failures: 0, Errors: 0, Skipped: 33
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
Contributed by Thomas Marquardt.
DETAILS:
1) The authentication version in the service has been updated from Dec19 to Feb20, so the client needs to be updated.
2) Add support and test cases for getXAttr and setXAttr (a usage sketch follows this list).
3) Update DelegationSASGenerator and related to use Duration instead of int for time periods.
4) Cleanup DelegationSASGenerator switch/case statement that maps operations to permissions.
5) Cleanup SASGenerator classes to use String.equals instead of ==.
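A sketch of the newly supported calls (the path, attribute name and value
are illustrative; "fs" is a FileSystem bound to an abfs:// URI):

  import java.nio.charset.StandardCharsets;
  import org.apache.hadoop.fs.Path;

  Path p = new Path("/data/file.txt");
  fs.setXAttr(p, "user.checksum", "abc123".getBytes(StandardCharsets.UTF_8));
  byte[] value = fs.getXAttr(p, "user.checksum");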
TESTS:
Added tests for getXAttr and setXAttr.
All tests are passing against my account in eastus2euap:
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 76, Failures: 0, Errors: 0, Skipped: 0
Tests run: 441, Failures: 0, Errors: 0, Skipped: 33
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24