Addresses transient failures in the following test classes:
* ITestAbfsStreamStatistics: Uses a filesystem-level static instance to record read/write statistics, which also picks up these operations from other tests running in parallel. Marked for sequential-only execution to avoid transient failures.
* ITestAbfsRestOperationException: The use of a static member to track the retry count causes transient failures when two tests of this class happen to run together. Switched to a non-static variable for the retry-count assertions (see the sketch below).
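A minimal sketch of the kind of change described in the second item, using hypothetical class and field names rather than the real ABFS test code:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical sketch: the retry count is tracked in an instance field
    // rather than a static one, so two tests of the same class running in
    // parallel no longer see each other's retries.
    public class ITestRetryCountSketch {

      // was: private static int retryCount;  (shared across parallel tests)
      private int retryCount;

      private void simulateRetry() {
        retryCount++;
      }

      @Test
      public void testRetryCountIsPerTestInstance() {
        simulateRetry();
        simulateRetry();
        // the assertion only observes retries triggered by this test instance
        assertEquals(2, retryCount);
      }
    }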
closes #3341
Contributed by Sumangala Patki
Change-Id: Ied4dec35c81e94efe5f999acae4bb8fde278202e
This switches the default behavior of S3A output streams
to warning that Syncable.hsync() or hflush() have been
called; it's not considered an error unless the defaults
are overridden.
This avoids breaking applications which call the APIs,
at the risk of people trying to use S3 as a safe store
of streamed data (HBase WALs, audit logs etc).
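A hedged sketch of what a caller of the Syncable API sees under the new default; the bucket and path are illustrative and the stream is an ordinary S3A output stream:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SyncableWarningSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // illustrative bucket; any S3A URI behaves the same way
        FileSystem fs =
            FileSystem.get(new java.net.URI("s3a://example-bucket/"), conf);
        try (FSDataOutputStream out = fs.create(new Path("/logs/app.log"))) {
          out.write("event".getBytes());
          // With the new default these calls log a warning instead of
          // throwing: S3A cannot durably persist data mid-stream, so the
          // calls are best-effort unless the defaults are overridden.
          out.hflush();
          out.hsync();
        }
      }
    }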
Contributed by Steve Loughran.
Change-Id: I0a02ec1e622343619f147f94158c18928a73a885
New and deprecated S3A encryption options and secrets are now resolved in the
same order whether they are stored in JCEKS (or other Hadoop credential
stores) or in XML files: per-bucket settings always take priority over global
values, even when the bucket-level options use the old option names.
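An illustrative sketch of the precedence rule, assuming a bucket named example-bucket; the per-bucket override deliberately uses the deprecated option name:

    import org.apache.hadoop.conf.Configuration;

    public class PerBucketPrecedenceSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();

        // global value, using the new option name
        conf.set("fs.s3a.encryption.algorithm", "SSE-S3");

        // per-bucket override, deliberately using the old (deprecated) name
        conf.set(
            "fs.s3a.bucket.example-bucket.server-side-encryption-algorithm",
            "SSE-KMS");

        // When the S3A connector resolves options for s3a://example-bucket/,
        // the per-bucket value (SSE-KMS) takes priority over the global one,
        // and the same ordering applies when the values come from a JCEKS
        // credential store instead of XML configuration.
      }
    }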
Contributed by Mehakmeet Singh and Steve Loughran
Change-Id: I871672071efa2eb6b600cb2658fceeef57f658a3
This migrates the fs.s3a-server-side encryption configuration options
to a name which covers client-side encryption too.
fs.s3a.server-side-encryption-algorithm becomes fs.s3a.encryption.algorithm
fs.s3a.server-side-encryption.key becomes fs.s3a.encryption.key
The existing keys remain valid; they are simply deprecated and remapped
to the new names. If you want server-side encryption options
to be picked up regardless of the Hadoop version, use
the old keys.
(The old keys also work for CSE, though since no version of Hadoop
with CSE support has shipped without this remapping, this is less
relevant.)
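A short sketch of the renamed options; the KMS key ARN is illustrative, and the comments note the deprecated names that remain valid:

    import org.apache.hadoop.conf.Configuration;

    public class EncryptionOptionRenamingSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();

        // new option names
        conf.set("fs.s3a.encryption.algorithm", "SSE-KMS");
        conf.set("fs.s3a.encryption.key",
            "arn:aws:kms:eu-west-1:111122223333:key/example");  // illustrative ARN

        // The deprecated names still work and map to the same settings:
        //   fs.s3a.server-side-encryption-algorithm
        //   fs.s3a.server-side-encryption.key
        // Use them if the configuration must also be read by older Hadoop
        // releases.
      }
    }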
Contributed by Mehakmeet Singh
Change-Id: I51804b21b287dbce18864f0a6ad17126aba2b281
S3A S3Guard tests to skip if S3-CSE is enabled (#3263)
Follow on to
* HADOOP-13887. Encrypt S3A data client-side with AWS SDK (S3-CSE)
If the S3A bucket is set up to use S3-CSE encryption, all tests which turn
on S3Guard are skipped, so they don't raise any exceptions about
incompatible configurations.
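A generic sketch of the skip pattern, not the project's actual test helpers; the configuration check and class name are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.junit.Assume;
    import org.junit.Before;
    import org.junit.Test;

    public class SkipS3GuardWithCseSketch {

      private Configuration conf;

      @Before
      public void setup() {
        conf = new Configuration();
        // Hypothetical check: skip S3Guard test cases when client-side
        // encryption is the configured algorithm, rather than letting the
        // incompatible combination raise an exception later.
        boolean cseEnabled = "CSE-KMS".equals(
            conf.get("fs.s3a.server-side-encryption-algorithm"));
        Assume.assumeFalse("Skipping S3Guard test: S3-CSE is enabled",
            cseEnabled);
      }

      @Test
      public void testSomethingRequiringS3Guard() {
        // only runs when S3-CSE is not enabled
      }
    }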
Contributed by Mehakmeet Singh
Change-Id: I9f4188109b56a1f4e5a31fae265d980c5795db1e
This (big!) patch adds support for client side encryption in AWS S3,
with keys managed by AWS-KMS.
Read the documentation in encryption.md very, very carefully before
use and consider it unstable.
S3-CSE is enabled in the existing configuration option
"fs.s3a.server-side-encryption-algorithm":
fs.s3a.server-side-encryption-algorithm=CSE-KMS
fs.s3a.server-side-encryption.key=<KMS_KEY_ID>
You cannot enable CSE and SSE in the same client, although
you can still enable a default SSE option in the S3 console.
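A minimal sketch of enabling S3-CSE on a client; the KMS key ARN is illustrative:

    import org.apache.hadoop.conf.Configuration;

    public class EnableCseKmsSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();

        // switch the (shared) encryption algorithm option to client-side
        // encryption
        conf.set("fs.s3a.server-side-encryption-algorithm", "CSE-KMS");

        // illustrative KMS key; use the id or ARN of your own key
        conf.set("fs.s3a.server-side-encryption.key",
            "arn:aws:kms:eu-west-1:111122223333:key/example");

        // Any filesystem created from this configuration encrypts data
        // client-side before upload; do not also set an SSE algorithm on
        // the same client.
      }
    }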
* Filesystem list/get status operations subtract 16 bytes from the length
of all files >= 16 bytes long to compensate for the padding which CSE
adds.
* The SDK always warns that the specific algorithm chosen is
deprecated. That algorithm is critical for ranged GET requests
(i.e. random IO) to work, so ignore the warning.
* Unencrypted files CANNOT BE READ.
The entire bucket SHOULD be encrypted with S3-CSE.
* Uploading files may be a bit slower as blocks are now
written sequentially.
* The Multipart Upload API is disabled when S3-CSE is active.
Contributed by Mehakmeet Singh
Change-Id: Ie1a27a036a39db66a67e9c6d33bc78d54ea708a0
Co-authored-by: Tao Yang <taoyang1@apache.org>
Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>
Reviewed-by: Wei-Chiu Chuang <weichiu@apache.org>
(cherry picked from commit 10b79a26fe)
Support building on Apple Silicon with ARM CPUs by using the x86_64 version of protoc.
Contributed by Dongjoon Hyun
Change-Id: I4b8330098822f1fd28f0113650eb709d53bcc690