Addresses the problem of processes running out of memory when
many ABFS output streams are queuing data to upload,
especially when the network upload bandwidth is lower than the rate
at which data is generated.
ABFS output streams now buffer their blocks of data to
"disk", "bytebuffer" or "array", as set in
"fs.azure.data.blocks.buffer".
When buffering via disk, the location for temporary storage
is set in "fs.azure.buffer.dir".
For safe scaling: use "disk" (default); for performance, when
confident that upload bandwidth will never be a bottleneck,
experiment with the memory options.
The number of blocks a single stream can have queued for uploading
is set in "fs.azure.block.upload.active.blocks".
The default value is 20.
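As a hedged illustration (not part of the change itself; the values and
path below are examples only), these options can be set on a Hadoop
Configuration before the filesystem is created:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // where queued blocks are buffered: "disk" (default), "bytebuffer" or "array"
    conf.set("fs.azure.data.blocks.buffer", "disk");
    // temporary storage location used when buffering via disk (example path)
    conf.set("fs.azure.buffer.dir", "/tmp/abfs-buffers");
    // maximum number of blocks a single stream may queue for upload
    conf.setInt("fs.azure.block.upload.active.blocks", 20);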
Contributed by Mehakmeet Singh.
* HDFS-16129. Fixing the misuse of the signature secret file in HttpFS.
The signature secret file was not used in HttpFS:
- if the configuration did not contain the deprecated
httpfs.authentication.signature.secret.file option, then the
random secret provider was used
- if both options (httpfs. and hadoop.http.) were set, then
the HttpFSAuthenticationFilter could not read the file
because the file path was not substituted properly
!NOTE! behavioral change: the deprecated httpfs. configuration
values are overwritten with the hadoop.http. values.
The commit also contains a follow-up change to YARN-10814:
empty secret files will result in a random secret provider.
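A minimal sketch of the resulting precedence when both keys are present
(the file paths below are purely illustrative):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration(false);
    // deprecated HttpFS-specific key
    conf.set("httpfs.authentication.signature.secret.file",
        "/etc/security/httpfs-signature.secret");
    // hadoop.http.* key; after this change its value is the one HttpFS uses
    conf.set("hadoop.http.authentication.signature.secret.file",
        "/etc/security/http-signature.secret");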
Co-authored-by: Tamas Domok <tdomok@cloudera.com>
This adds a new class org.apache.hadoop.util.Preconditions which is
* @Private/@Unstable
* Intended to allow us to move off Google Guava
* Designed to be trivially backportable
(i.e. it contains no references to Guava classes internally)
Please use this instead of the guava equivalents, where possible.
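A short usage sketch (BlockReader is a made-up example class; the
checkNotNull/checkArgument entry points are assumed to mirror the Guava
class, and message-formatting overloads may differ):

    import org.apache.hadoop.util.Preconditions;

    public class BlockReader {
      private final byte[] buffer;

      public BlockReader(byte[] buffer, int blockSize) {
        // replaces com.google.common.base.Preconditions in new code
        Preconditions.checkNotNull(buffer);
        Preconditions.checkArgument(blockSize > 0);
        this.buffer = buffer;
      }
    }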
Contributed by: Ahmed Hussein
This migrates the fs.s3a server-side encryption configuration options
to names which cover client-side encryption too.
fs.s3a.server-side-encryption-algorithm becomes fs.s3a.encryption.algorithm
fs.s3a.server-side-encryption.key becomes fs.s3a.encryption.key
The existing keys remain valid: they are simply deprecated and remapped
to the new values. If you want server-side encryption options
to be picked up regardless of Hadoop version, use
the old keys.
(The old keys also work for CSE, though as no version of Hadoop
with CSE support has shipped without this remapping, this is less
relevant.)
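For example, SSE-KMS could now be configured under the new names (the key
ARN below is a placeholder):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // new names; the fs.s3a.server-side-encryption* keys still work but are deprecated
    conf.set("fs.s3a.encryption.algorithm", "SSE-KMS");
    conf.set("fs.s3a.encryption.key",
        "arn:aws:kms:eu-west-1:000000000000:key/placeholder-key-id");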
Contributed by: Mehakmeet Singh
* Router to support resolving monitored namenodes with DNS
Includes a test for the NNHAServiceTarget constructor, a helper
function to extract the port, and use of LambdaTestUtils.intercept
in the tests.
* CredentialProviderFactory to detect and report on recursion.
* S3AFS to remove incompatible providers.
* Integration Test for this.
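For context, a hedged illustration of the configuration involved (paths
and hostnames are examples only): a credential provider stored on an
s3a:// URL would itself need S3 credentials to be read, which is the
recursion now detected; keeping the keystore off s3a avoids it.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.alias.CredentialProviderFactory;

    Configuration conf = new Configuration();
    // hadoop.security.credential.provider.path: keep s3a secrets on a
    // non-s3a store so that resolving them does not require s3a itself
    conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
        "jceks://hdfs@namenode.example.com/secrets/s3a.jceks");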
Contributed by Steve Loughran.
Previously, GzipCodec only supported BuiltInGzipDecompressor when native
zlib was not loaded, so without the Hadoop native codec installed, saving a
SequenceFile with GzipCodec failed with an exception like "SequenceFile
doesn't work with GzipCodec without native-hadoop code!".
As with the other codecs already migrated to prepared packages (lz4,
snappy), GzipCodec now works without the Hadoop native codec installed:
similar to BuiltInGzipDecompressor, a Java Deflater based
BuiltInGzipCompressor is used.
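A hedged sketch of what now works without the native library (the path,
key and value types are arbitrary choices for the example):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class GzipSequenceFileDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // without native zlib, GzipCodec can now fall back to the
        // java.util.zip based BuiltInGzipCompressor
        GzipCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
            SequenceFile.Writer.file(new Path("/tmp/demo.seq")),   // example path
            SequenceFile.Writer.keyClass(IntWritable.class),
            SequenceFile.Writer.valueClass(Text.class),
            SequenceFile.Writer.compression(
                SequenceFile.CompressionType.BLOCK, codec))) {
          writer.append(new IntWritable(1), new Text("hello"));
        }
      }
    }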
Fixes the regression caused by HADOOP-17511 by moving where the
option fs.s3a.acl.default is read: it is now read before the
RequestFactory is created.
Adds
* A unit test in TestRequestFactory to verify the ACLs are set
on all file write operations.
* A new ITestS3ACannedACLs test which verifies that ACLs really
do get all the way through.
* S3A Assumed Role delegation tokens to include the IAM permission
s3:PutObjectAcl in the generated role.
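For reference, a canned ACL is requested with a setting such as the
following (the value is one of the supported canned ACL names):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // applied to every object written by S3A; after this fix the option
    // is read before the RequestFactory is created, so it takes effect again
    conf.set("fs.s3a.acl.default", "BucketOwnerFullControl");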
Contributed by Steve Loughran
This patch cuts down the size of directory trees used for
distcp contract tests against object stores, making
them much faster against distant/slow stores.
On abfs, the test only runs with -Dscale (as was the case for s3a already),
and has the larger scale test timeout.
After every test case, the FileSystem IOStatistics are logged
to provide information about what IO is taking place and
what its performance is.
Some test cases upload files of 1+ MiB; the size of the upload
can be increased with the option
"scale.test.distcp.file.size.kb".
Set it to zero and the large file tests are skipped.
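In practice this is normally passed to maven as a -D system property; as a
sketch, the programmatic equivalent is simply:

    // -Dscale.test.distcp.file.size.kb=0 on the maven command line, or:
    System.setProperty("scale.test.distcp.file.size.kb", "0");  // skip large file tests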
Contributed by Steve Loughran.
This work
* Defines the behavior of FileSystem.copyFromLocal in filesystem.md
* Provides a high-performance implementation of CopyFromLocalOperation
for S3
* Adds a contract test for the operation: AbstractContractCopyFromLocalTest
* Implements the contract tests for Local and S3A FileSystems
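A hedged usage sketch of the public API involved (the Java method is
FileSystem.copyFromLocalFile; the bucket name and paths are placeholders):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CopyFromLocalDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem s3a = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
        // uploads a local file; on S3A this now uses the optimized
        // CopyFromLocalOperation code path
        s3a.copyFromLocalFile(
            new Path("file:///tmp/local-data.csv"),
            new Path("s3a://example-bucket/data/local-data.csv"));
      }
    }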
Contributed by: Bogdan Stolojan
The REST endpoint was unusable with an empty secret file
(it threw IllegalArgumentExceptions).
Any IO error would have resulted in the same fallback path.
Co-authored-by: Tamas Domok <tdomok@cloudera.com>