Fixes read-ahead buffer management issues introduced by HADOOP-16852,
"ABFS: Send error back to client for Read Ahead request failure".
Contributed by Sneha Vijayarajan
DETAILS:
This change adds config key "fs.azure.enable.conditional.create.overwrite" with
a default of true. When enabled, if create(path, overwrite: true) is invoked
and the file exists, the ABFS driver will first obtain its etag and then attempt
to overwrite the file on the condition that the etag matches. The purpose of this
is to mitigate the non-idempotency of this method. Specifically, in the event of
a network error or similar failure, the client will retry, and this can result in the file
being created more than once, which may lead to data loss. In essence this is
like a poor man's file handle, and will be addressed more thoroughly in the future
when support for leases is added to ABFS.
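A minimal sketch of enabling the option from client code follows; the class name, account URI and file path are illustrative, and only the configuration key and its default of true come from this change.
  // Sketch: enable etag-conditional overwrite for create(path, overwrite=true).
  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ConditionalCreateOverwriteExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Defaults to true; set explicitly here for clarity.
      conf.setBoolean("fs.azure.enable.conditional.create.overwrite", true);
      // Illustrative account/container URI.
      FileSystem fs = FileSystem.get(
          URI.create("abfs://container@account.dfs.core.windows.net/"), conf);
      // With the flag enabled, overwriting an existing file first fetches its etag and
      // the overwrite only succeeds if the etag still matches, so a retried create
      // cannot silently clobber data written by another creator.
      fs.create(new Path("/data/output.bin"), true).close();
      fs.close();
    }
  }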
TEST RESULTS:
namespace.enabled=true
auth.type=SharedKey
-------------------
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 42
Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
namespace.enabled=true
auth.type=OAuth
-------------------
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 87, Failures: 0, Errors: 0, Skipped: 0
Tests run: 457, Failures: 0, Errors: 0, Skipped: 74
Tests run: 207, Failures: 0, Errors: 0, Skipped: 140
Adds options to control the size of the per-output-stream thread pool used when
writing data through the abfs connector; a usage sketch follows the list.
* fs.azure.write.max.concurrent.requests
* fs.azure.write.max.requests.to.queue
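A minimal tuning sketch; the numeric values, class name and account URI are illustrative and are not the defaults.
  // Sketch: cap the per-output-stream upload thread pool (illustrative values).
  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;

  public class AbfsWriteTuningExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      conf.setInt("fs.azure.write.max.concurrent.requests", 4); // concurrent uploads per stream
      conf.setInt("fs.azure.write.max.requests.to.queue", 8);   // queued uploads per stream
      // Illustrative account/container URI.
      FileSystem fs = FileSystem.get(
          URI.create("abfs://container@account.dfs.core.windows.net/"), conf);
      // Output streams opened on this FileSystem honour the limits above.
      fs.close();
    }
  }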
Contributed by Bilahari T H
Contributed by Thomas Marquardt
DETAILS: WASB depends on the Azure Storage Java SDK. There is a concurrency
bug in the Azure Storage Java SDK that can cause the results of a list blobs
operation to appear empty. This causes the FileSystem listStatus and similar
APIs to return empty results. This has been seen in Spark workloads when jobs
use more than one executor core.
See Azure/azure-storage-java#546 for details on the bug in the Azure Storage SDK.
TESTS: A new test was added to validate the fix. All tests are passing:
wasb:
mvn -T 1C -Dparallel-tests=wasb -Dscale -DtestsThreadCount=8 clean verify
Tests run: 248, Failures: 0, Errors: 0, Skipped: 11
Tests run: 651, Failures: 0, Errors: 0, Skipped: 65
abfs:
mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 64, Failures: 0, Errors: 0, Skipped: 0
Tests run: 437, Failures: 0, Errors: 0, Skipped: 33
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
Contributed by Thomas Marquardt.
DETAILS:
1) The authentication version in the service has been updated from Dec19 to Feb20, so the client needs to be updated.
2) Add support and test cases for getXAttr and setXAttr (usage sketch after this list).
3) Update DelegationSASGenerator and related classes to use Duration instead of int for time periods.
4) Clean up the DelegationSASGenerator switch/case statement that maps operations to permissions.
5) Clean up the SASGenerator classes to use String.equals instead of ==.
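A minimal usage sketch for item 2; the path and attribute name are illustrative, and setXAttr/getXAttr are the standard Hadoop FileSystem methods.
  // Sketch: round-trip an extended attribute through the FileSystem API.
  import java.net.URI;
  import java.nio.charset.StandardCharsets;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class XAttrExample {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(
          URI.create("abfs://container@account.dfs.core.windows.net/"), new Configuration());
      Path file = new Path("/data/sample.txt");
      fs.create(file, true).close(); // the attribute is set on an existing file
      // XAttr names carry the standard Hadoop "user." namespace prefix.
      fs.setXAttr(file, "user.owner-team", "analytics".getBytes(StandardCharsets.UTF_8));
      byte[] value = fs.getXAttr(file, "user.owner-team");
      System.out.println(new String(value, StandardCharsets.UTF_8)); // prints "analytics"
      fs.close();
    }
  }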
TESTS:
Added tests for getXAttr and setXAttr.
All tests are passing against my account in eastus2euap:
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 76, Failures: 0, Errors: 0, Skipped: 0
Tests run: 441, Failures: 0, Errors: 0, Skipped: 33
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
Contributed by Thomas Marquardt.
DETAILS:
Previously we had a SASGenerator class which generated a Service SAS, but we needed to add a DelegationSASGenerator.
I separated SASGenerator into a base class and two subclasses, ServiceSASGenerator and DelegationSASGenerator. The
code in ServiceSASGenerator is copied from SASGenerator, but the DelegationSASGenerator code is new. The
DelegationSASGenerator code demonstrates how to use Delegation SAS with minimal permissions, as would be used
by an authorization service such as Apache Ranger. Adding this to the tests helps us lock in this behavior.
Added a MockDelegationSASTokenProvider for testing User Delegation SAS.
Fixed the ITestAzureBlobFileSystemCheckAccess tests to assume an OAuth client ID so that they are ignored when it
is not configured.
To improve performance, AbfsInputStream/AbfsOutputStream re-use SAS tokens until the expiry is within 120 seconds.
After this a new SAS will be requested. The default period of 120 seconds can be changed using the configuration
setting "fs.azure.sas.token.renew.period.for.streams".
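A minimal sketch of overriding the renewal period; the value is in seconds, and the 300 shown here is illustrative.
  // Sketch: widen the SAS renewal margin from the 120-second default (illustrative value).
  import org.apache.hadoop.conf.Configuration;

  public class SasRenewPeriodExample {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Streams request a fresh SAS once the cached token is within this many seconds of expiry.
      conf.set("fs.azure.sas.token.renew.period.for.streams", "300");
    }
  }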
The SASTokenProvider operation names were updated to correspond better with the ADLS Gen2 REST API, since these
operations must be provided tokens with appropriate SAS parameters to succeed.
Support for the version 2.0 AAD authentication endpoint was added to AzureADAuthenticator.
The getFileStatus method was mistakenly calling the ADLS Gen2 Get Properties API, which requires read permission,
while the getFileStatus call only requires execute permission. The ADLS Gen2 Get Status API is supposed to be used
for this purpose, so the underlying AbfsClient.getPathStatus API was updated with an includeProperties
parameter which is set to false for getFileStatus and true for getXAttr.
Added SASTokenProvider support for delete recursive.
Fixed bugs in AzureBlobFileSystem where public methods were not validating the Path by calling makeQualified. This is
necessary to avoid passing null paths and to convert relative paths into absolute paths.
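A minimal sketch of what qualifying a path gives the public methods; the account URI and paths are illustrative.
  // Sketch: makeQualified turns a relative path into an absolute, scheme-qualified one.
  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class MakeQualifiedExample {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(
          URI.create("abfs://container@account.dfs.core.windows.net/"), new Configuration());
      // Resolved against the working directory and given the filesystem's scheme and authority,
      // e.g. abfs://container@account.dfs.core.windows.net/<working-dir>/logs/2020-06-01
      Path qualified = fs.makeQualified(new Path("logs/2020-06-01"));
      System.out.println(qualified);
      fs.close();
    }
  }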
Canonicalized the path used for root path internally so that root path can be used with SAS tokens, which requires
that the path in the URL and the path in the SAS token match. Internally, the code sometimes used
"//" instead of "/" for the root path. Also related to this, the AzureBlobFileSystemStore.getRelativePath
API was updated so that we no longer remove and then add back a preceding forward slash in paths.
To run ITestAzureBlobFileSystemDelegationSAS tests follow the instructions in testing_azure.md under the heading
"To run Delegation SAS test cases". You also need to set "fs.azure.enable.check.access" to true.
TEST RESULTS:
namespace.enabled=true
auth.type=SharedKey
-------------------
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 41
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
namespace.enabled=false
auth.type=SharedKey
-------------------
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 244
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
namespace.enabled=true
auth.type=SharedKey
sas.token.provider.type=MockDelegationSASTokenProvider
enable.check.access=true
-------------------
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 0, Skipped: 33
Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
namespace.enabled=true
auth.type=OAuth
-------------------
$mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
Tests run: 63, Failures: 0, Errors: 0, Skipped: 0
Tests run: 432, Failures: 0, Errors: 1, Skipped: 74
Tests run: 206, Failures: 0, Errors: 0, Skipped: 140
Contributed by: Mehakmeet Singh
In some cases the ABFS prefetch thread runs in the background and returns some bytes from the buffer, producing an extra readOp. This makes the readOps values non-deterministic and causes intermittent failures, with readOps values of 2 or 3 being observed on different setups.