Commit Graph

1875 Commits

Author SHA1 Message Date
PJ Fanning
fa9bb0d1ac
HADOOP-19231. Add JacksonUtil to manage Jackson classes (#6953)
New class org.apache.hadoop.util.JacksonUtil centralizes construction of
Jackson ObjectMappers and JsonFactories.
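
A minimal usage sketch, assuming a factory method named createBasicObjectMapper()
(the exact JacksonUtil method names are an assumption from this description):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.hadoop.util.JacksonUtil;

    // Instead of scattering "new ObjectMapper()" across the codebase,
    // construction goes through the shared utility class.
    ObjectMapper mapper = JacksonUtil.createBasicObjectMapper();
    String json = mapper.writeValueAsString(java.util.Map.of("ok", true));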

Contributed by PJ Fanning
2024-08-15 16:44:54 +01:00
Steve Loughran
55a576906d
HADOOP-19131. Assist reflection IO with WrappedOperations class (#6686)
1. The class WrappedIO has been extended with more filesystem operations

- openFile()
- PathCapabilities
- StreamCapabilities
- ByteBufferPositionedReadable

All these static methods raise UncheckedIOExceptions rather than
checked ones.

2. The adjacent class org.apache.hadoop.io.wrappedio.WrappedStatistics
provides similar access to IOStatistics/IOStatisticsContext classes
and operations.

Allows callers to:
* Get a serializable IOStatisticsSnapshot from an IOStatisticsSource or
  IOStatistics instance
* Save an IOStatisticsSnapshot to file
* Convert an IOStatisticsSnapshot to JSON
* Given an object which may be an IOStatisticsSource, return an object
  whose toString() value is a dynamically generated, human readable summary.
  This is for logging.
* Separate getters to the different sections of IOStatistics.
* Mean values are returned as a Map.Entry<Long, Long> of (samples, sum)
  from which means may be calculated.
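
A sketch of these operations against the public statistics classes which
WrappedStatistics wraps (org.apache.hadoop.fs.statistics; the stream here
stands in for any IOStatisticsSource):

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.statistics.IOStatistics;
    import org.apache.hadoop.fs.statistics.IOStatisticsLogging;
    import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot;
    import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

    // in: an open FSDataInputStream (implements IOStatisticsSource);
    // LOG: any slf4j Logger.
    IOStatistics stats = in.getIOStatistics();
    // Serializable snapshot: can be saved to a file or converted to JSON.
    IOStatisticsSnapshot snapshot = IOStatisticsSupport.snapshotIOStatistics(stats);
    // Object whose toString() is a lazily generated, human-readable summary.
    Object summary = IOStatisticsLogging.demandStringifyIOStatisticsSource(in);
    LOG.info("IO statistics: {}", summary);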

There are examples of the dynamic bindings to these classes in:

org.apache.hadoop.io.wrappedio.impl.DynamicWrappedIO
org.apache.hadoop.io.wrappedio.impl.DynamicWrappedStatistics

These use DynMethods and other classes in the package
org.apache.hadoop.util.dynamic which are based on the
Apache Parquet equivalents.
This makes it easier to re-implement them in that library and in others
which have their own fork of these classes (example: Apache Iceberg).
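
A sketch of the binding pattern, modelled on the Parquet DynMethods builder
these classes are derived from (treat the exact builder methods as assumptions):

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.util.dynamic.DynMethods;

    // Bind to a static method by name via reflection; if the target class
    // or method is absent, fall back to a no-op rather than failing.
    DynMethods.UnboundMethod pageSize = new DynMethods.Builder("bulkDelete_pageSize")
        .impl("org.apache.hadoop.io.wrappedio.WrappedIO",
            FileSystem.class, Path.class)
        .orNoop()
        .build();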

3. The openFile() option "fs.option.openfile.read.policy" has
added specific file format policies for the core filetypes

* avro
* columnar
* csv
* hbase
* json
* orc
* parquet

S3A chooses the appropriate sequential/random policy for each of these formats.

A policy `parquet, columnar, vector, random, adaptive` will use the parquet policy for
any filesystem aware of it, falling back to the first entry in the list which
the specific version of the filesystem recognizes.
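
A sketch of requesting a format-specific policy through the standard
openFile() builder (the option key and list format are as described above):

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.util.functional.FutureIO;

    // fs: an initialized FileSystem. Ask for the parquet policy, with
    // fallbacks for filesystems/releases which do not recognize it.
    FSDataInputStream in = FutureIO.awaitFuture(
        fs.openFile(new Path("s3a://bucket/data/part-0000.parquet"))
            .opt("fs.option.openfile.read.policy",
                "parquet, columnar, vector, random, adaptive")
            .build());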

4. New Path capability fs.capability.virtual.block.locations

Indicates that locations are generated client side
and don't refer to real hosts.

Contributed by Steve Loughran
2024-08-14 14:43:00 +01:00
Viraj Jasani
fa83c9a805
HADOOP-19072 S3A: Override fs.s3a.performance.flags for tests (ADDENDUM 2) (#6993)
Second followup to #6543; all hadoop-aws integration tests complete correctly even when 

fs.s3a.performance.flags = *

Contributed by Viraj Jasani
2024-08-14 10:57:44 +01:00
Viraj Jasani
74ff00705c
HADOOP-19072. S3A: Override fs.s3a.performance.flags for tests (ADDENDUM) (#6985)
This is a followup to #6543 which ensures all test pass in configurations where 
fs.s3a.performance.flags is set to "*" or contains "mkdirs"

Contributed by VJ Jasani
2024-08-12 14:16:44 +01:00
Viraj Jasani
321a6cc55e
HADOOP-19072. S3A: expand optimisations on stores with "fs.s3a.performance.flags" for mkdir (#6543)
If the flag list in fs.s3a.performance.flags includes "mkdir"
then the safety check of walking up the tree to look for a parent directory
-done to verify a directory isn't being created under a file- is skipped.

This saves the cost of multiple list operations.
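
A minimal sketch of enabling just this optimization (set before the
filesystem instance is created):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // Skip the parent-directory safety walk on mkdirs() calls.
    conf.set("fs.s3a.performance.flags", "mkdir");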

Contributed by Viraj Jasani
2024-08-08 17:48:51 +01:00
Steve Loughran
2cf4d638af
HADOOP-19245. S3ABlockOutputStream no longer sends progress events in close() (#6974)
Contributed by Steve Loughran
2024-08-02 16:01:03 +01:00
Steve Loughran
a5806a9e7b
HADOOP-19161. S3A: option "fs.s3a.performance.flags" to take list of performance flags (#6789)
1. Configuration adds new method `getEnumSet()` to get a set of enum values from
   a configuration string.
   <E extends Enum<E>> EnumSet<E> getEnumSet(String key, Class<E> enumClass, boolean ignoreUnknown)

   Whitespace is ignored, case is ignored and the value "*" is mapped to "all values of the enum".
   If "ignoreUnknown" is true then when parsing, unknown values are ignored.
   This is recommended for forward compatibility with later versions.

2. This support is implemented in org.apache.hadoop.fs.s3a.impl.ConfigurationHelper -it can be used
    elsewhere in the hadoop codebase.

3. A new private FlagSet class in hadoop common manages a set of enum flags.

   It implements StreamCapabilities and can be probed for a specific option
   being set (with a prefix).


S3A adds an option fs.s3a.performance.flags which builds a FlagSet with enum
type PerformanceFlagEnum:

* which initially contains {Create, Delete, Mkdir, Open}
* the existing fs.s3a.create.performance option sets the flag "Create".
* tests which configure fs.s3a.create.performance MUST clear
  fs.s3a.performance.flags in test setup.

Future performance flags are planned, with different levels of safety
and/or backwards compatibility.
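
A usage sketch of the new method, using the signature quoted above
(PerformanceFlagEnum's package is omitted here):

    import java.util.EnumSet;
    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    conf.set("fs.s3a.performance.flags", "create, mkdir");
    // ignoreUnknown=true: values from future releases are silently skipped.
    EnumSet<PerformanceFlagEnum> flags = conf.getEnumSet(
        "fs.s3a.performance.flags", PerformanceFlagEnum.class, true);
    // A value of "*" would map to EnumSet.allOf(PerformanceFlagEnum.class).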

Contributed by Steve Loughran
2024-07-29 11:33:51 +01:00
Raphael Azzolini
4525c7e35e
HADOOP-19197. S3A: Support AWS KMS Encryption Context (#6874)
The new property fs.s3a.encryption.context allows users to specify the AWS KMS Encryption Context to be used in S3A.

The value of the encryption context is a key/value string that will be Base64 encoded and set in the ssekmsEncryptionContext parameter of the S3 client.
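
A sketch of setting the property alongside the existing SSE-KMS options
(the example context value is illustrative only):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    conf.set("fs.s3a.encryption.algorithm", "SSE-KMS");
    conf.set("fs.s3a.encryption.key", kmsKeyArn);   // placeholder: your KMS key ARN
    // Key/value pairs; Base64 encoded and passed as ssekmsEncryptionContext.
    conf.set("fs.s3a.encryption.context", "project=datasets,team=analytics");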

Contributed by Raphael Azzolini
2024-07-23 17:09:04 +01:00
Pranav Saxena
b60497ff41
HADOOP-19120. ApacheHttpClient adaptation in ABFS. (#6633)
Apache httpclient 4.5.x is the new default implementation of http connections;
this supports a large configurable pool of connections along with
the ability to limit their lifespan.

The networking library can be chosen using the configuration
option fs.azure.networking.library

The supported values are
- APACHE_HTTP_CLIENT : Use Apache HttpClient [Default]
- JDK_HTTP_URL_CONNECTION : Use JDK networking library

Important: unless the networking library is switched back to
the JDK, the Apache httpcore and httpclient JARs must be on the classpath.
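
A sketch of selecting the networking library explicitly:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // Fall back to the JDK client; with this set, the Apache httpcore and
    // httpclient JARs are no longer needed on the classpath.
    conf.set("fs.azure.networking.library", "JDK_HTTP_URL_CONNECTION");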

Contributed by Pranav Saxena
2024-07-22 19:03:51 +01:00
Anuj Modi
51cb858cc8
HADOOP-19208: [ABFS] Fixing logic to determine HNS nature of account to avoid extra getAcl() calls (#6893) 2024-07-15 21:51:54 +05:30
Steve Loughran
4c55adbb6b
HADOOP-19205. S3A: initialization/close slower than with v1 SDK (#6892)
Adds new ClientManager interface/implementation which provides on-demand
creation of synchronous and asynchronous s3 clients, s3 transfer manager,
and in close() terminates these.

S3A FS is modified to
* Create a ClientManagerImpl instance and pass down to its S3Store.
* Use the same ClientManager interface against S3Store to demand-create
  the services.
* Only create the async client as part of the transfer manager creation,
  which will take place during the first rename() operation.
* Statistics on client creation count and duration are recorded.
+ Statistics on the time to initialize and shutdown the S3A FS are collected
  in IOStatistics for reporting.

Adds to hadoop common class
  LazyAtomicReference<T> implements CallableRaisingIOE<T>, Supplier<T>
and subclass
  LazyAutoCloseableReference<T extends AutoCloseable>
    extends LazyAtomicReference<T> implements AutoCloseable

These evaluate the Supplier<T>/CallableRaisingIOE<T> they were
constructed with on the first (successful) read of the value.
Any exception raised during this operation will be rethrown, and on future
evaluations the same operation retried.

These classes implement the Supplier and CallableRaisingIOE
interfaces, so they can actually be used to implement lazy function
evaluation, as Haskell and some other functional languages do.

LazyAutoCloseableReference is AutoCloseable; its close() method will
close the inner reference if it is set.

This class is used in ClientManagerImpl for the lazy S3 client creation
and closure.
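
A sketch of the pattern, assuming a constructor which takes the
supplier/callable to evaluate (as the description above implies;
buildS3Client() is a hypothetical factory):

    // Nothing is created yet; the callable runs on the first get().
    LazyAutoCloseableReference<S3Client> clientRef =
        new LazyAutoCloseableReference<>(() -> buildS3Client());

    S3Client client = clientRef.get();   // first read: builds the client
    // ... later, during shutdown:
    clientRef.close();                   // closes the client iff it was created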

Contributed by Steve Loughran.
2024-07-05 16:38:37 +01:00
Steve Loughran
c33d868606
HADOOP-19210. S3A: Speed up some slow unit tests (#6907)
Speed up slow tests
* TestS3AAWSCredentialsProvider: decrease thread pool shutdown time
* TestS3AInputStreamRetry: reduce retry limit and intervals

Contributed by Steve Loughran
2024-07-02 11:34:45 +01:00
HarshitGupta11
d3b98cb1b2
HADOOP-19194: Add test to find unshaded dependencies in the aws sdk (#6865)
The new test TestAWSV2SDK scans the aws sdk bundle.jar and prints out all classes
which are unshaded, so at risk of creating classpath problems

It does not fail the test if this holds, because the current SDKs
do ship with unshaded classes; the test would always fail.

The SDK upgrade process should include inspecting the output
of this test to see if it has got worse (do a before/after check).

Once the AWS SDK does shade everything, we can have this
test fail on any regression.

Contributed by Harshit Gupta
2024-06-24 10:41:11 +01:00
Steve Loughran
2d5fa9e016
HADOOP-18508. S3A: Support parallel integration test runs on same bucket (#5081)
It is now possible to provide a job ID in the maven "job.id" property
for hadoop-aws test runs, to isolate paths under the test bucket
under which all tests will be executed.

This will allow independent builds *in different source trees*
to test against the same bucket in parallel, and is designed for
CI testing.

Example:

mvn verify -Dparallel-tests -Droot.tests.enabled=false -Djob.id=1
mvn verify -Droot.tests.enabled=false -Djob.id=2

- Root tests must be disabled to stop them cleaning up
  the test paths of other test runs.
- Do still regularly run the root tests just to force cleanup
  of the output of any interrupted test suites.  

Contributed by Steve Loughran
2024-06-14 19:34:52 +01:00
Anuj Modi
005030f7a0
HADOOP-18610: [ABFS] OAuth2 Token Provider support for Azure Workload Identity (#6787)
Add support for Azure Active Directory (Azure AD) workload identities which integrate with Kubernetes's native capabilities to federate with any external identity provider.

Contributed By: Anuj Modi
2024-06-11 13:06:39 -05:00
Pranav Saxena
2e1deee87a
HADOOP-19137. [ABFS] Prevent ABFS initialization for non-hierarchical-namespace account if Customer-provided-key configs given. (#6752)
Customer-provided-key (CPK) configs are not allowed with non-hierarchical-namespace (non-HNS) accounts for ABFS. This patch aims to prevent ABFS initialization for non-HNS accounts if CPK configs are provided.

Contributed by: Pranav Saxena
2024-06-10 15:03:41 -05:00
Steve Loughran
01d257d5aa
HADOOP-19189. ITestS3ACommitterFactory failing (#6857)
* parameterize the test run rather than do it from within the test suite.
* log what the committer factory is up to (and improve its logging)
* close all filesystems, then create the test filesystem with cache enabled.

The cache is critical, we want the fs from cache to be used when querying
filesystem properties, rather than one created from the committer jobconf,
which will have the same options as the task committer, so not actually
validate the override logic.

Contributed by Steve Loughran
2024-06-07 17:34:01 +01:00
Anuj Modi
bbb17e76a7
HADOOP-19178: [WASB Deprecation] Updating Documentation on Upcoming Plans for Hadoop-Azure (#6862)
Contributed by Anuj Modi
2024-06-07 14:28:24 +01:00
Mukund Thakur
f92a8ab8ae
HADOOP-19190. Skip ITestS3AEncryptionWithDefaultS3Settings.testEncryptionFileAttributes when bucket not encrypted with sse-kms (#6859)
Follow up of HADOOP-19190
2024-06-03 12:00:31 -05:00
Anuj Modi
d8b485a512
HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication (#6552)
Contributed by Anuj Modi
2024-05-30 20:46:19 +01:00
Steve Loughran
d00b3acd5e
HADOOP-18679. Followup: change method name case (#6854)
WrappedIO.bulkDelete_PageSize() => bulkDelete_pageSize()

Makes it consistent with the HADOOP-19131 naming scheme.
The name needs to be fixed before invoking it through reflection,
as once that is attempted the binding won't work at run time,
though compilation will be happy.

Contributed by Steve Loughran
2024-05-30 19:34:30 +01:00
Mukund Thakur
f4fde40524
HADOOP-19184. S3A Fix TestStagingCommitter.testJobCommitFailure (#6843)
Follow up on HADOOP-18679

Contributed by: Mukund Thakur
2024-05-28 11:27:33 -05:00
Anmol Asrani
d168d3ffee
HADOOP-18325: ABFS: Add correlated metric support for ABFS operations (#6314)
Adds support for metric collection at the filesystem instance level.
Metrics are pushed to the store upon the closure of a filesystem instance, encompassing all operations
that utilized that specific instance.

Collected Metrics:

- Number of successful requests without any retries.
- Count of requests that succeeded after a specified number of retries (x retries).
- Request count subjected to throttling.
- Number of requests that failed despite exhausting all retry attempts.

Implementation Details:

Incorporated logic in the AbfsClient to facilitate metric pushing through an additional request.
This occurs in scenarios where no requests are sent to the backend for a defined idle period.
These enhancements ensure comprehensive monitoring of filesystem interactions, giving a clearer picture of success rates, retry scenarios, throttling instances, and exhaustive failures. The AbfsClient logic also pushes metrics proactively during idle periods, maintaining a continuous and accurate view of filesystem performance.

Contributed by Anmol Asrani
2024-05-23 15:10:10 +01:00
Mukund Thakur
47be1ab3b6
HADOOP-18679. Add API for bulk/paged delete of files (#6726)
Applications can create a BulkDelete instance from a
BulkDeleteSource; the BulkDelete interface provides
pageSize(), the maximum number of entries which can be
deleted in one call, and a bulkDelete(Collection paths)
method which can take a collection of up to pageSize() entries.

This is optimized for object stores with bulk delete APIs;
the S3A connector will offer the page size of
fs.s3a.bulk.delete.page.size unless bulk delete has
been disabled.

Even with a page size of 1, the S3A implementation is
more efficient than delete(path)
as there are no safety checks for the path being a directory
or probes for the need to recreate directories.

The interface BulkDeleteSource is implemented by
all FileSystem implementations, with a page size
of 1 and mapped to delete(pathToDelete, false).
This means that callers do not need to have special
case handling for object stores versus classic filesystems.

To aid use through reflection APIs, the class
org.apache.hadoop.io.wrappedio.WrappedIO
has been created with "reflection friendly" methods.
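
A direct-API sketch (assuming the factory method on FileSystem is
createBulkDelete(Path); check the interface for the exact names):

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import org.apache.hadoop.fs.BulkDelete;
    import org.apache.hadoop.fs.Path;

    // fs: an initialized FileSystem.
    Path base = new Path("s3a://bucket/dir");
    try (BulkDelete bulk = fs.createBulkDelete(base)) {   // assumed name
      int limit = bulk.pageSize();   // 1 on classic filesystems
      // Returns the paths which could not be deleted, with error text.
      List<Map.Entry<Path, String>> failures = bulk.bulkDelete(
          Arrays.asList(new Path(base, "a"), new Path(base, "b")));
    }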

Contributed by Mukund Thakur and Steve Loughran
2024-05-20 17:05:25 +01:00
Mukund Thakur
a97e3022de
HADOOP-19013. Adding x-amz-server-side-encryption-aws-kms-key-id in the get file attributes for S3A. (#6646)
Contributed by: Mukund Thakur
2024-05-15 11:54:54 -05:00
xuzifu666
cf9559eb27
HADOOP-19073 WASB: Fix connection leak in FolderRenamePending (#6534)
Contributed by xuyu
2024-05-15 14:38:06 +01:00
Steve Loughran
c9270600b7
MAPREDUCE-7474. Improve Manifest committer resilience (#6716)
Improve task commit resilience everywhere
and add an option to reduce delete IO requests on
job cleanup (relevant for ABFS and HDFS).

Task Commit Resilience
----------------------

Task manifest saving is re-attempted on failure; the number of 
attempts made is configurable with the option:

  mapreduce.manifest.committer.manifest.save.attempts

* The default is 5.
* The minimum is 1; asking for less is ignored.
* A retry policy adds 500ms of sleep per attempt.
* Move from classic rename() to commitFile() to rename the file,
  after calling getFileStatus() to get its length and possibly etag.
  This becomes a rename() on gcs/hdfs anyway, but on abfs it does reach
  the ResilientCommitByRename callbacks, which report on the outcome
  to the caller... which is then logged at WARN.
* New statistic task_stage_save_summary_file to distinguish from
  other saving operations (job success/report file).
  This is only saved to the manifest on task commit retries, and
  provides statistics on all previous unsuccessful attempts to save
  the manifests
+ test changes to match the codepath changes, including improvements
  in fault injection.

Directory size for deletion
---------------------------

New option

  mapreduce.manifest.committer.cleanup.parallel.delete.base.first

This makes an initial attempt at deleting the base dir, only falling
back to parallel deletes if there's a timeout.

This option is disabled by default; consider enabling it for abfs to
reduce IO load. Consult the documentation for more details.
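
A sketch of tuning both options in a job configuration:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // Retry manifest saves up to 5 times (the default; minimum is 1).
    conf.setInt("mapreduce.manifest.committer.manifest.save.attempts", 5);
    // Try deleting the base directory first; parallel delete on timeout.
    conf.setBoolean(
        "mapreduce.manifest.committer.cleanup.parallel.delete.base.first", true);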

Success file printing
---------------------

The command to print a JSON _SUCCESS file from this committer and
any S3A committer is now something which can be invoked from
the mapred command:

  mapred successfile <path to file>

Contributed by Steve Loughran
2024-05-13 21:12:34 +01:00
Viraj Jasani
a8a58944bd
HADOOP-19146. S3A: noaa-cors-pds test bucket access with global endpoint fails (#6723)
HADOOP-19057 switched the hadoop-aws test bucket from landsat-pds to 
noaa-cors-pds 

This new bucket isn't accessible if the client configuration
sets an fs.s3a.endpoint/region value other than us-east-1.

Contributed by Viraj Jasani
2024-04-30 12:16:36 +01:00
Anuj Modi
a6f2c4617e
HADOOP-19150: [ABFS] Fixing Test Code for ITestAbfsRestOperationException#testAuthFailException (#6756)
Contributed by: Anuj Modi
2024-04-29 11:48:34 -05:00
Xi Chen
aa169e1093
HADOOP-19159. S3A. Fix documentation of fs.s3a.committer.abort.pending.uploads (#6778)
The description of `fs.s3a.committer.abort.pending.uploads` in the section `Concurrent Jobs writing to the same destination` is not correct. Its default value is `true`.

Contributed by Xi Chen
2024-04-29 15:49:35 +01:00
Pranav Saxena
6404692c09
HADOOP-19102. [ABFS] FooterReadBufferSize should not be greater than readBufferSize (#6617)
Contributed by  Pranav Saxena
2024-04-22 18:36:12 +01:00
Anuj Modi
bd1a08b2cf
HADOOP-19129: [ABFS] Test Fixes and Test Script Bug Fixes (#6676)
Contributed by Anuj Modi
2024-04-12 17:52:47 +01:00
Anuj Modi
dbe2d61258
HADOOP-19096. [ABFS] [CST Optimization] Enhance Client-Side Throttling Metrics Logic (#6276)
ABFS has a client-side throttling mechanism which works on the metrics
collected from past requests.

When requests fail due to server-side throttling, it updates its
metrics and recalculates any client-side backoff.
metrics and recalculates any client side backoff.

The choice of which requests should be used to compute client side
backoff interval is based on the http status code:

- Status code in 2xx range: Successful Operations should contribute.
- Status code in 3xx range: Redirection Operations should not contribute.
- Status code in 4xx range: User Errors should not contribute.
- Status code is 503: Throttling errors should contribute only if they
  are due to a client limit breach, as follows:
  * 503, Ingress Over Account Limit: Should Contribute
  * 503, Egress Over Account Limit: Should Contribute
  * 503, TPS Over Account Limit: Should Contribute
  * 503, Other Server Throttling: Should not Contribute.
- Status code in 5xx range other than 503: Should not Contribute.
- IOException and UnknownHostExceptions: Should not Contribute.
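
An illustrative restatement of that policy in code (not the actual ABFS
implementation):

    // Should this response contribute to client-side backoff metrics?
    static boolean contributes(int statusCode, String serverMessage) {
      if (statusCode >= 200 && statusCode < 300) {
        return true;                                  // successful operations
      }
      if (statusCode == 503 && serverMessage != null) {
        return serverMessage.contains("Over Account Limit");  // ingress/egress/TPS
      }
      return false;   // 3xx, 4xx, other 5xx, network failures
    }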

Contributed by Anuj Modi
2024-04-10 14:46:23 +01:00
Anuj Modi
6ed73896f6
HADOOP-18656. [ABFS] Add Support for Paginated Delete for Large Directories in HNS Account (#6409)
Contributed by Anuj Modi
2024-04-04 19:48:25 +01:00
Dongjoon Hyun
d7157b4aa9
HADOOP-19141. Vector IO: Update default values consistently (#6702)
Contributed by Dongjoon Hyun
2024-04-04 10:56:40 +01:00
Steve Loughran
62182b1f74
HADOOP-19098. Vector IO: test failure followup (#6701)
Revert changes in ITestDelegatedMRJob which came in with HADOOP-19098

Contributed by Steve Loughran
2024-04-03 14:40:41 -05:00
Steve Loughran
87fb977777
HADOOP-19098. Vector IO: Specify and validate ranges consistently. #6604
Clarifies behaviour of VectorIO methods with contract tests as well as
specification.

* Add precondition range checks to all implementations
* Identify and fix bug where direct buffer reads was broken
  (HADOOP-19101; this surfaced in ABFS contract tests)
* Logging in VectoredReadUtils.
* TestVectoredReadUtils verifies validation logic.
* FileRangeImpl toString() improvements
* CombinedFileRange tracks bytes in range which are wanted;
   toString() output logs this.

HDFS
* Add test TestHDFSContractVectoredRead

ABFS
* Add test ITestAbfsFileSystemContractVectoredRead

S3A
* checks for vector IO being stopped in all iterative
  vector operations, including draining
* maps read() returning -1 to failure
* passes in file length to validation
* Error reporting to only completeExceptionally() those ranges
  which had not yet read data in.
* Improved logging.

readVectored()
* made synchronized. This is only for the invocation;
  the actual async retrieves are unsynchronized.
* closes input stream on invocation
* switches to random IO, so avoids keeping any long-lived connection around.

+ AbstractSTestS3AHugeFiles enhancements.
+ ADDENDUM: test fix in ITestS3AContractVectoredRead

Contains: HADOOP-19101. Vectored Read into off-heap buffer broken in fallback
implementation
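
A sketch of the readVectored() call path being specified (core API; the
path and range values are arbitrary):

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileRange;
    import org.apache.hadoop.util.functional.FutureIO;

    // fs: an initialized FileSystem; path: an existing file.
    List<FileRange> ranges = Arrays.asList(
        FileRange.createFileRange(0, 1024),           // offset, length
        FileRange.createFileRange(1024 * 1024, 4096));
    try (FSDataInputStream in = fs.open(path)) {
      // Validates the ranges, then starts the (possibly async) reads.
      in.readVectored(ranges, ByteBuffer::allocate);
      for (FileRange range : ranges) {
        ByteBuffer data = FutureIO.awaitFuture(range.getData());
      }
    }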

Contributed by Steve Loughran

Change-Id: Ia4ed71864c595f175c275aad83a2ff5741693432
2024-04-03 13:17:52 +01:00
Steve Loughran
b4f9d8e6fa
Revert "HADOOP-19098. Vector IO: Specify and validate ranges consistently."
This reverts commit ba7faf90c8.
2024-04-03 13:15:05 +01:00
Steve Loughran
ba7faf90c8
HADOOP-19098. Vector IO: Specify and validate ranges consistently.
Clarifies behaviour of VectorIO methods with contract tests as well as specification.

* Add precondition range checks to all implementations
* Identify and fix bug where direct buffer reads was broken
  (HADOOP-19101; this surfaced in ABFS contract tests)
* Logging in VectoredReadUtils.
* TestVectoredReadUtils verifies validation logic.
* FileRangeImpl toString() improvements
* CombinedFileRange tracks bytes in range which are wanted;
   toString() output logs this.

HDFS
* Add test TestHDFSContractVectoredRead

ABFS
* Add test ITestAbfsFileSystemContractVectoredRead

S3A
* checks for vector IO being stopped in all iterative
  vector operations, including draining
* maps read() returning -1 to failure
* passes in file length to validation
* Error reporting to only completeExceptionally() those ranges
  which had not yet read data in.
* Improved logging.  

readVectored()
* made synchronized. This is only for the invocation;
  the actual async retrieves are unsynchronized.
* closes input stream on invocation
* switches to random IO, so avoids keeping any long-lived connection around.

+ AbstractSTestS3AHugeFiles enhancements.

Contains: HADOOP-19101. Vectored Read into off-heap buffer broken in fallback implementation

Contributed by Steve Loughran
2024-04-02 20:16:38 +01:00
PJ Fanning
06db6289cb
HADOOP-19024. Use bouncycastle jdk18 1.77 (#6410). Contributed by PJ Fanning 2024-03-30 19:58:12 +05:30
PJ Fanning
97c5a6efba
HADOOP-19041. Use StandardCharsets in more places (#6449) 2024-03-28 23:17:18 -04:00
xiaojunxiang
8528d5783d
HDFS-17216. Distcp: when handling small files, the bandwidth parameter is ignored; fix this bug. (#6138) 2024-03-28 10:31:06 -04:00
Syed Shameerur Rahman
032796a0fb
HADOOP-19047: S3A: Support in-memory tracking of Magic Commit data (#6468)
If the option fs.s3a.committer.magic.track.commits.in.memory.enabled
is set to true, then rather than save data about in-progress uploads
to S3, this information is cached in memory.

If the number of files being committed is low, this will save network IO
in both the generation of .pending and marker files, and in the scanning
of task attempt directory trees during task commit.
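
A sketch of enabling the option:

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // Track in-progress magic uploads in memory instead of writing
    // .pending files to S3.
    conf.setBoolean(
        "fs.s3a.committer.magic.track.commits.in.memory.enabled", true);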

Contributed by Syed Shameerur Rahman
2024-03-26 15:29:35 +00:00
Viraj Jasani
9fe371aa15
HADOOP-18980. Invalid inputs for getTrimmedStringCollectionSplitByEquals (ADDENDUM) (#6546)
This is a followup to #6406:
HADOOP-18980. S3A credential provider remapping: make extensible

It adds extra validation of key-value pairs in a configuration
option, with tests.

Contributed by Viraj Jasani
2024-03-26 11:18:03 +00:00
Anuj Modi
c4fa1b65fb
HADOOP-19089: [ABFS] Reverting Back Support of setXAttr() and getXAttr() on root path (#6592)
This reverts most of
HADOOP-18869: [ABFS] Fix behavior of a File System APIs on root path (#6003).

Calling getXAttr("/") or setXAttr("/") on an abfs container will fail with

`Operation failed: "The request URI is invalid.", HTTP 400 Bad Request`

This change is to ensure:
* Consistency across ADLS clients
* Consistency across authentication mechanisms.

Contributed by Anuj Modi
2024-03-25 14:13:24 +00:00
Adnan Hemani
8b2058a4e7
HADOOP-19050. S3A: Support S3 Access Grants (#6544)
This adds support for Amazon S3 Access Grants to the S3A connector.

For more information, see:
* https://aws.amazon.com/s3/features/access-grants/
* https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/

Contributed by Adnan Hemani
2024-03-19 17:49:51 +00:00
drankye
4d88f9892a
HADOOP-19085. Compatibility Benchmark over HCFS Implementations
Contributed by Han Liu
2024-03-17 16:48:29 +08:00
Viraj Jasani
a325876fec
HADOOP-19066. Run FIPS test for valid bucket locations (ADDENDUM) (#6624)
FIPS is only supported in north america AWS regions; relevant tests in
ITestS3AEndpointRegion are skipped for buckets with different endpoints/regions.
2024-03-13 13:21:50 +00:00
Viraj Jasani
44c14edac7
HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint (#6539)
Contributed by Viraj Jasani
2024-03-12 18:49:06 +00:00
Steve Loughran
bb32aec88e
HADOOP-19043. S3A: Regression: ITestS3AOpenCost fails on prefetch test runs (#6465)
Disables the new tests added in:

HADOOP-19027. S3A: S3AInputStream doesn't recover from HTTP/channel exceptions #6425

The underlying issue here is that the block prefetch code can identify
when there's a mismatch between declared and actual length, and doesn't
store any of the incomplete buffer.

This should be addressed in HADOOP-18184.

Contributed by Steve Loughran
2024-03-08 12:48:38 +00:00