Commit Graph

1854 Commits

Mukund Thakur
f4fde40524
HADOOP-19184. S3A Fix TestStagingCommitter.testJobCommitFailure (#6843)
Follow-up to HADOOP-18679.

Contributed by: Mukund Thakur
2024-05-28 11:27:33 -05:00
Anmol Asrani
d168d3ffee
HADOOP-18325: ABFS: Add correlated metric support for ABFS operations (#6314)
Adds support for metric collection at the filesystem instance level.
Metrics are pushed to the store upon the closure of a filesystem instance, encompassing all operations
that utilized that specific instance.

Collected Metrics:

- Number of successful requests without any retries.
- Count of requests that succeeded after a specified number of retries (x retries).
- Request count subjected to throttling.
- Number of requests that failed despite exhausting all retry attempts, and so on.

Implementation Details:

Incorporated logic in the AbfsClient to facilitate metric pushing through an additional request.
This occurs in scenarios where no requests are sent to the backend for a defined idle period.
These enhancements give comprehensive monitoring and analysis of filesystem interactions: success rates, retry scenarios, throttling instances and exhausted-retry failures. The AbfsClient logic also pushes metrics proactively during idle periods, keeping the view of filesystem performance continuous and accurate.
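
A sketch of the idle-period push pattern described above (class and method names here are illustrative, not the actual AbfsClient internals):

  import java.util.concurrent.atomic.AtomicLong;

  public class MetricPusher {
    private final AtomicLong lastRequestTime =
        new AtomicLong(System.currentTimeMillis());
    private final long idleWindowMillis;

    public MetricPusher(long idleWindowMillis) {
      this.idleWindowMillis = idleWindowMillis;
    }

    /** Record that a request was just sent to the backend. */
    public void onRequest() {
      lastRequestTime.set(System.currentTimeMillis());
    }

    /** Called periodically: push only if no traffic for the idle window. */
    public void maybePushMetrics() {
      if (System.currentTimeMillis() - lastRequestTime.get() >= idleWindowMillis) {
        pushMetrics();
      }
    }

    private void pushMetrics() {
      // In the real client this issues the additional request carrying
      // the collected retry/throttling counters to the store.
      System.out.println("pushing accumulated metrics");
    }
  }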

Contributed by Anmol Asrani
2024-05-23 15:10:10 +01:00
Mukund Thakur
47be1ab3b6
HADOOP-18679. Add API for bulk/paged delete of files (#6726)
Applications can create a BulkDelete instance from a
BulkDeleteSource; the BulkDelete interface provides
pageSize(), the maximum number of entries which can be
deleted in one call, and a bulkDelete(Collection paths)
method which accepts a collection of up to pageSize() paths.

This is optimized for object stores with bulk delete APIs;
the S3A connector will offer the page size of
fs.s3a.bulk.delete.page.size unless bulk delete has
been disabled.

Even with a page size of 1, the S3A implementation is
more efficient than delete(path)
as there are no safety checks for the path being a directory
or probes for the need to recreate directories.

The interface BulkDeleteSource is implemented by
all FileSystem implementations, with a page size
of 1 and mapped to delete(pathToDelete, false).
This means that callers do not need to have special
case handling for object stores versus classic filesystems.

To aid use through reflection APIs, the class
org.apache.hadoop.io.wrappedio.WrappedIO
has been created with "reflection friendly" methods.
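
A minimal usage sketch, assuming createBulkDelete(Path) as the factory
method on FileSystem (names are taken from the description above and
may differ in detail):

  import java.util.ArrayList;
  import java.util.Collection;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.BulkDelete;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class BulkDeleteExample {
    public static void main(String[] args) throws Exception {
      Path base = new Path("s3a://bucket/dir");
      FileSystem fs = base.getFileSystem(new Configuration());
      // Every FileSystem is a BulkDeleteSource; page size is 1 except
      // on stores with a native bulk delete call.
      try (BulkDelete bulkDelete = fs.createBulkDelete(base)) {
        int pageSize = bulkDelete.pageSize();
        Collection<Path> page = new ArrayList<>();
        for (int i = 0; i < pageSize; i++) {
          page.add(new Path(base, "file-" + i));
        }
        // Deletes up to pageSize() paths under the base path.
        bulkDelete.bulkDelete(page);
      }
    }
  }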

Contributed by Mukund Thakur and Steve Loughran
2024-05-20 17:05:25 +01:00
Mukund Thakur
a97e3022de
HADOOP-19013. Adding x-amz-server-side-encryption-aws-kms-key-id in the get file attributes for S3A. (#6646)
Contributed by: Mukund Thakur
2024-05-15 11:54:54 -05:00
xuzifu666
cf9559eb27
HADOOP-19073 WASB: Fix connection leak in FolderRenamePending (#6534)
Contributed by xuyu
2024-05-15 14:38:06 +01:00
Steve Loughran
c9270600b7
MAPREDUCE-7474. Improve Manifest committer resilience (#6716)
Improve task commit resilience everywhere
and add an option to reduce delete IO requests on
job cleanup (relevant for ABFS and HDFS).

Task Commit Resilience
----------------------

Task manifest saving is re-attempted on failure; the number of 
attempts made is configurable with the option:

  mapreduce.manifest.committer.manifest.save.attempts

* The default is 5.
* The minimum is 1; lower values are ignored.
* A retry policy adds 500ms of sleep per attempt.
* Move from classic rename() to commitFile() to rename the file,
  after calling getFileStatus() to get its length and possibly etag.
  This becomes a rename() on gcs/hdfs anyway, but on abfs it reaches
  the ResilientCommitByRename callbacks, which report the outcome to
  the caller, where it is logged at WARN.
* New statistic task_stage_save_summary_file to distinguish from
  other saving operations (job success/report file).
  This is only saved to the manifest on task commit retries, and
  provides statistics on all previous unsuccessful attempts to save
the manifests.
+ test changes to match the codepath changes, including improvements
  in fault injection.

Directory deletion on job cleanup
---------------------------------

New option

  mapreduce.manifest.committer.cleanup.parallel.delete.base.first

This makes an initial attempt at deleting the base dir, falling
back to parallel deletes only if that attempt times out.

This option is disabled by default; consider enabling it for abfs to
reduce IO load. Consult the documentation for more details.
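
A sketch of setting both options in a job configuration:

  import org.apache.hadoop.conf.Configuration;

  public class ManifestCommitterTuning {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Retry manifest saves up to 7 times rather than the default 5.
      conf.setInt("mapreduce.manifest.committer.manifest.save.attempts", 7);
      // Try one delete of the base directory first, falling back to
      // parallel deletes on a timeout; disabled by default.
      conf.setBoolean(
          "mapreduce.manifest.committer.cleanup.parallel.delete.base.first",
          true);
    }
  }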

Success file printing
---------------------

The command to print a JSON _SUCCESS file from this committer and
any S3A committer is now something which can be invoked from
the mapred command:

  mapred successfile <path to file>

Contributed by Steve Loughran
2024-05-13 21:12:34 +01:00
Viraj Jasani
a8a58944bd
HADOOP-19146. S3A: noaa-cors-pds test bucket access with global endpoint fails (#6723)
HADOOP-19057 switched the hadoop-aws test bucket from landsat-pds to
noaa-cors-pds.

This new bucket isn't accessible if the client configuration
sets an fs.s3a.endpoint/region value other than us-east-1.

Contributed by Viraj Jasani
2024-04-30 12:16:36 +01:00
Anuj Modi
a6f2c4617e
HADOOP-19150: [ABFS] Fixing Test Code for ITestAbfsRestOperationException#testAuthFailException (#6756)
Contributed by: Anuj Modi
2024-04-29 11:48:34 -05:00
Xi Chen
aa169e1093
HADOOP-19159. S3A. Fix documentation of fs.s3a.committer.abort.pending.uploads (#6778)
The description of `fs.s3a.committer.abort.pending.uploads` in the section `Concurrent Jobs writing to the same destination` is not correct. Its default value is `true`.

Contributed by Xi Chen
2024-04-29 15:49:35 +01:00
Pranav Saxena
6404692c09
HADOOP-19102. [ABFS] FooterReadBufferSize should not be greater than readBufferSize (#6617)
Contributed by  Pranav Saxena
2024-04-22 18:36:12 +01:00
Anuj Modi
bd1a08b2cf
HADOOP-19129: [ABFS] Test Fixes and Test Script Bug Fixes (#6676)
Contributed by Anuj Modi
2024-04-12 17:52:47 +01:00
Anuj Modi
dbe2d61258
HADOOP-19096. [ABFS] [CST Optimization] Enhance Client-Side Throttling Metrics Logic (#6276)
ABFS has a client-side throttling mechanism which works on the metrics
collected from past requests.

When requests fail due to server-side throttling, it updates its
metrics and recalculates any client-side backoff.

The choice of which requests should be used to compute the client-side
backoff interval is based on the HTTP status code:

- Status code in 2xx range: Successful Operations should contribute.
- Status code in 3xx range: Redirection Operations should not contribute.
- Status code in 4xx range: User Errors should not contribute.
- Status code is 503: Throttling errors should contribute only if they
  are due to a client limit breach, as follows:
  * 503, Ingress Over Account Limit: Should Contribute
  * 503, Egress Over Account Limit: Should Contribute
  * 503, TPS Over Account Limit: Should Contribute
  * 503, Other Server Throttling: Should not Contribute.
- Status code in 5xx range other than 503: Should not Contribute.
- IOException and UnknownHostExceptions: Should not Contribute.
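
The rules above can be summarised as a predicate; this is an
illustrative sketch, not the actual ABFS code:

  public class ThrottlingMetricFilter {
    /**
     * @param status HTTP status code of the completed request.
     * @param clientLimitBreach for a 503, whether it was caused by an
     *        ingress/egress/TPS account limit breach rather than other
     *        server-side throttling.
     */
    public static boolean contributesToBackoff(int status,
        boolean clientLimitBreach) {
      if (status >= 200 && status < 300) {
        return true;              // successful operations contribute
      }
      if (status == 503) {
        return clientLimitBreach; // only client-limit 503s contribute
      }
      return false;               // 3xx, 4xx, other 5xx never contribute
    }
  }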

Contributed by Anuj Modi
2024-04-10 14:46:23 +01:00
Anuj Modi
6ed73896f6
HADOOP-18656. [ABFS] Add Support for Paginated Delete for Large Directories in HNS Account (#6409)
Contributed by Anuj Modi
2024-04-04 19:48:25 +01:00
Dongjoon Hyun
d7157b4aa9
HADOOP-19141. Vector IO: Update default values consistently (#6702)
Contributed by Dongjoon Hyun
2024-04-04 10:56:40 +01:00
Steve Loughran
62182b1f74
HADOOP-19098. Vector IO: test failure followup (#6701)
Revert changes in ITestDelegatedMRJob which came in with HADOOP-19098

Contributed by Steve Loughran
2024-04-03 14:40:41 -05:00
Steve Loughran
87fb977777
HADOOP-19098. Vector IO: Specify and validate ranges consistently. (#6604)
Clarifies behaviour of VectorIO methods with contract tests as well as
specification.

* Add precondition range checks to all implementations
* Identify and fix a bug where direct buffer reads were broken
  (HADOOP-19101; this surfaced in ABFS contract tests)
* Logging in VectoredReadUtils.
* TestVectoredReadUtils verifies validation logic.
* FileRangeImpl toString() improvements
* CombinedFileRange tracks bytes in range which are wanted;
   toString() output logs this.

HDFS
* Add test TestHDFSContractVectoredRead

ABFS
* Add test ITestAbfsFileSystemContractVectoredRead

S3A
* checks for vector IO being stopped in all iterative
  vector operations, including draining
* maps read() returning -1 to failure
* passes in file length to validation
* Error reporting to only completeExceptionally() those ranges
  which had not yet read data in.
* Improved logging.

readVectored()
* made synchronized. This is only for the invocation;
  the actual async retrieves are unsynchronized.
* closes input stream on invocation
* switches to random IO, so avoids keeping any long-lived connection around.

+ AbstractSTestS3AHugeFiles enhancements.
+ ADDENDUM: test fix in ITestS3AContractVectoredRead

Contains: HADOOP-19101. Vectored Read into off-heap buffer broken in fallback
implementation
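
A minimal sketch of the vectored read API whose behaviour this commit
specifies:

  import java.nio.ByteBuffer;
  import java.util.Arrays;
  import java.util.List;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileRange;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class VectoredReadExample {
    public static void main(String[] args) throws Exception {
      Path path = new Path(args[0]);
      FileSystem fs = path.getFileSystem(new Configuration());
      // Ranges must pass the validation rules: non-negative offset
      // and length; overlapping ranges are rejected.
      List<FileRange> ranges = Arrays.asList(
          FileRange.createFileRange(0, 4096),
          FileRange.createFileRange(1 << 20, 4096));
      try (FSDataInputStream in = fs.open(path)) {
        in.readVectored(ranges, ByteBuffer::allocate);
        for (FileRange range : ranges) {
          ByteBuffer data = range.getData().join();
          System.out.println("read " + data.remaining()
              + " bytes at offset " + range.getOffset());
        }
      }
    }
  }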

Contributed by Steve Loughran

Change-Id: Ia4ed71864c595f175c275aad83a2ff5741693432
2024-04-03 13:17:52 +01:00
Steve Loughran
b4f9d8e6fa
Revert "HADOOP-19098. Vector IO: Specify and validate ranges consistently."
This reverts commit ba7faf90c8.
2024-04-03 13:15:05 +01:00
Steve Loughran
ba7faf90c8
HADOOP-19098. Vector IO: Specify and validate ranges consistently.
Clarifies behaviour of VectorIO methods with contract tests as well as specification.

* Add precondition range checks to all implementations
* Identify and fix a bug where direct buffer reads were broken
  (HADOOP-19101; this surfaced in ABFS contract tests)
* Logging in VectoredReadUtils.
* TestVectoredReadUtils verifies validation logic.
* FileRangeImpl toString() improvements
* CombinedFileRange tracks bytes in range which are wanted;
   toString() output logs this.

HDFS
* Add test TestHDFSContractVectoredRead

ABFS
* Add test ITestAbfsFileSystemContractVectoredRead

S3A
* checks for vector IO being stopped in all iterative
  vector operations, including draining
* maps read() returning -1 to failure
* passes in file length to validation
* Error reporting to only completeExceptionally() those ranges
  which had not yet read data in.
* Improved logging.  

readVectored()
* made synchronized. This is only for the invocation;
  the actual async retrieves are unsynchronized.
* closes input stream on invocation
* switches to random IO, so avoids keeping any long-lived connection around.

+ AbstractSTestS3AHugeFiles enhancements.

Contains: HADOOP-19101. Vectored Read into off-heap buffer broken in fallback implementation

Contributed by Steve Loughran
2024-04-02 20:16:38 +01:00
PJ Fanning
06db6289cb
HADOOP-19024. Use bouncycastle jdk18 1.77 (#6410). Contributed by PJ Fanning
2024-03-30 19:58:12 +05:30
PJ Fanning
97c5a6efba
HADOOP-19041. Use StandardCharsets in more places (#6449)
2024-03-28 23:17:18 -04:00
xiaojunxiang
8528d5783d
HDFS-17216. Distcp: when handling small files, the bandwidth parameter is ignored; fix this bug. (#6138)
2024-03-28 10:31:06 -04:00
Syed Shameerur Rahman
032796a0fb
HADOOP-19047: S3A: Support in-memory tracking of Magic Commit data (#6468)
If the option fs.s3a.committer.magic.track.commits.in.memory.enabled
is set to true, then rather than saving data about in-progress uploads
to S3, this information is cached in memory.

If the number of files being committed is low, this will save network IO
in both the generation of .pending and marker files, and in the scanning
of task attempt directory trees during task commit.
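
A sketch of enabling the option:

  import org.apache.hadoop.conf.Configuration;

  public class MagicCommitTracking {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Track in-progress uploads in memory rather than in .pending
      // files on S3; best suited to jobs committing few files.
      conf.setBoolean(
          "fs.s3a.committer.magic.track.commits.in.memory.enabled", true);
    }
  }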

Contributed by Syed Shameerur Rahman
2024-03-26 15:29:35 +00:00
Viraj Jasani
9fe371aa15
HADOOP-18980. Invalid inputs for getTrimmedStringCollectionSplitByEquals (ADDENDUM) (#6546)
This is a followup to #6406:
HADOOP-18980. S3A credential provider remapping: make extensible

It adds extra validation of key-value pairs in a configuration
option, with tests.

Contributed by Viraj Jasani
2024-03-26 11:18:03 +00:00
Anuj Modi
c4fa1b65fb
HADOOP-19089: [ABFS] Reverting Back Support of setXAttr() and getXAttr() on root path (#6592)
This reverts most of
HADOOP-18869: [ABFS] Fix behavior of a File System APIs on root path (#6003).

Calling getXAttr("/") or setXAttr("/") on an abfs container will fail with

`Operation failed: "The request URI is invalid.", HTTP 400 Bad Request`

 
This change is to ensure:
* Consistency across ADLS clients
* Consistency across authentication mechanisms.

Contributed by Anuj Modi
2024-03-25 14:13:24 +00:00
Adnan Hemani
8b2058a4e7
HADOOP-19050. S3A: Support S3 Access Grants (#6544)
This adds support for Amazon S3 Access Grants to the S3A connector.

For more information, see:
* https://aws.amazon.com/s3/features/access-grants/
* https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/

Contributed by Adnan Hemani
2024-03-19 17:49:51 +00:00
drankye
4d88f9892a
HADOOP-19085. Compatibility Benchmark over HCFS Implementations
Contributed by Han Liu
2024-03-17 16:48:29 +08:00
Viraj Jasani
a325876fec
HADOOP-19066. Run FIPS test for valid bucket locations (ADDENDUM) (#6624)
FIPS is only supported in North America AWS regions; relevant tests in
ITestS3AEndpointRegion are skipped for buckets with different endpoints/regions.
2024-03-13 13:21:50 +00:00
Viraj Jasani
44c14edac7
HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint (#6539)
Contributed by Viraj Jasani
2024-03-12 18:49:06 +00:00
Steve Loughran
bb32aec88e
HADOOP-19043. S3A: Regression: ITestS3AOpenCost fails on prefetch test runs (#6465)
Disables the new tests added in:

HADOOP-19027. S3A: S3AInputStream doesn't recover from HTTP/channel exceptions #6425

The underlying issue here is that the block prefetch code can identify
when there's a mismatch between declared and actual length, and doesn't
store any of the incomplete buffer.

This should be addressed in HADOOP-18184.

Contributed by Steve Loughran
2024-03-08 12:48:38 +00:00
Anuj Modi
99b9e7fb43
HADOOP-18910: [ABFS] Adding Support for MD5 Hash based integrity verification of the request content during transport (#6069)
Contributed By: Anuj Modi
2024-02-22 11:49:37 -06:00
Anuj Modi
1336c362e5
HADOOP-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. (#5881)
Contributed By: Anuj Modi
2024-02-20 11:31:42 -06:00
Steve Loughran
095dfcca30
HADOOP-18088. Replace log4j 1.x with reload4j. (#4052)
Co-authored-by: Wei-Chiu Chuang <weichiu@apache.org>


Includes HADOOP-18354. Upgrade reload4j to 1.22.2 due to XXE vulnerability (#4607). 

Log4j 1.2.17 has been replaced by reload4j 1.22.2;
SLF4J is at 1.7.36.
2024-02-13 16:33:51 +00:00
Steve Loughran
7651afd3db
HADOOP-19057. S3A: Landsat bucket used in tests no longer accessible (#6515)
The AWS landsat data previously used in some S3A tests is no
longer accessible.

This PR moves to the new external file
s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz

* Large enough file for scale tests
* Bucket supports anonymous access
* Ends in .gz to keep codec tests happy
* No spaces in path to keep bucket-info happy

Test Code Changes
* Leaves the test key name alone: fs.s3a.scale.test.csvfile
* Rename methods and fields to remove "csv" from their names and
  refer to an "external file"; we no longer require it to be CSV.
* Path definition and helper methods have been moved to PublicDatasetTestUtils
* Improve error reporting in ITestS3AInputStreamPerformance if the file
  is too short
  
With S3 Select removed, there is no need for the file to be
a CSV file; there is a test which tries to unzip it; other
tests have a minimum file size.

Consult the JIRA for the settings to add to auth-keys.xml
to switch earlier builds to this same file.
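
A sketch of pointing an earlier build at the same file (the same key
can go in auth-keys.xml):

  import org.apache.hadoop.conf.Configuration;

  public class ExternalTestFile {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // The key name keeps "csv" for compatibility, although the file
      // no longer needs to be a CSV file.
      conf.set("fs.s3a.scale.test.csvfile",
          "s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz");
    }
  }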

Contributed by Steve Loughran
2024-02-13 10:46:36 +00:00
Sadanand Shenoy
0bf439c0f9
HDFS-17376. Distcp creates Factor 1 replication file on target if Source is EC. (#6540)
2024-02-09 17:26:32 +00:00
Steve Loughran
3f98cb6741
HADOOP-19045. CreateSession Timeout - followup (#6532)
This is a followup to PR:
HADOOP-19045. S3A: Validate CreateSession Timeout Propagation (#6470)

Remove all declarations of fs.s3a.connection.request.timeout
in
- hadoop-common/src/main/resources/core-default.xml
- hadoop-aws/src/test/resources/core-site.xml

New test in TestAwsClientConfig to verify that the value
defined in fs.s3a.Constants class is used.

This is brittle to someone overriding it in their test setups,
but as this test is intended to verify that the option is not
explicitly set, there's no workaround.

Contributed by Steve Loughran
2024-02-07 12:07:54 +00:00
Antonio Murgia
b11159d799
HADOOP-18993. Add option fs.s3a.classloader.isolation (#6301)
The option fs.s3a.classloader.isolation (default: true) can be set to
false to disable s3a classloader isolation.

This can assist in using custom credential providers and other extension points.
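
A sketch of disabling isolation:

  import org.apache.hadoop.conf.Configuration;

  public class ClassloaderIsolation {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Default is true; false makes custom credential providers on the
      // application classpath visible to the S3A connector.
      conf.setBoolean("fs.s3a.classloader.isolation", false);
    }
  }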

Contributed by Antonio Murgia
2024-02-05 17:59:36 +00:00
Viraj Jasani
d278b349f6
HADOOP-19044. S3A: AWS SDK V2 - Update region logic (#6479)
Improves region handling in the S3A connector, including enabling cross-region support
when that is considered necessary.

Consult the documentation in connecting.md/connecting.html for the current
resolution process.

Contributed by Viraj Jasani
2024-02-02 17:07:05 +00:00
Viraj Jasani
7504b8505f
HADOOP-18980. S3A credential provider remapping: make extensible (#6406)
Contributed by Viraj Jasani
2024-02-02 17:02:48 +00:00
Steve Loughran
8261229daa
HADOOP-18830. Cut S3 Select (#6144)
Cut out S3 Select
* leave public/unstable constants alone
* s3guard tool will fail with error
* s3afs path capability will fail
* openFile() will fail with specific error
* s3 select doc updated
* Cut eventstream jar
* New test: ITestSelectUnsupported verifies new failure
  handling above

Contributed by Steve Loughran
2024-01-30 16:12:27 +00:00
Steve Loughran
6da1a19a83
HADOOP-19045. S3A: Validate CreateSession Timeout Propagation (#6470)
New test ITestCreateSessionTimeout to verify that the duration set
in fs.s3a.connection.request.timeout is passed all the way down.

This is done by adding a sleep() in a custom signer and verifying
that it is interrupted and that an AWSApiCallTimeoutException is
raised.

+ Fix testRequestTimeout()
* doesn't skip if considered cross-region
* sets a minimum duration of 0 before invocation
* resets the minimum afterwards

Contributed by Steve Loughran
2024-01-30 15:32:24 +00:00
Pranav Saxena
7dc166ddc7
HADOOP-18883. [ABFS]: Expect-100 JDK bug resolution: prevent multiple server calls (#6022)
Address JDK bug JDK-8314978 related to handling of HTTP 100
responses. 

https://bugs.openjdk.org/browse/JDK-8314978

In the AbfsHttpOperation, after sendRequest() we call the processResponse()
method from AbfsRestOperation.
Even if conn.getOutputStream() fails due to an expect-100 error,
we consume the exception and let the code go ahead.
This may call getHeaderField() / getHeaderFields() / getHeaderFieldLong() after
getOutputStream() has failed. These invocations all lead to server calls.

This commit aims to prevent this.
If connection.getOutputStream() fails due to an Expect-100 error,
the ABFS client does not invoke getHeaderField(), getHeaderFields(),
getHeaderFieldLong() or getInputStream().

getResponseCode() is safe, as on failure it sets the
responseCode variable in the HttpUrlConnection object.
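
An illustrative sketch of the guard, using plain HttpURLConnection
rather than the actual ABFS classes:

  import java.io.IOException;
  import java.io.OutputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;

  public class Expect100Guard {
    public static void send(URL url, byte[] body) throws IOException {
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.setDoOutput(true);
      conn.setRequestMethod("PUT");
      conn.setRequestProperty("Expect", "100-continue");
      boolean writeFailed = false;
      try (OutputStream out = conn.getOutputStream()) {
        out.write(body);
      } catch (IOException e) {
        // Expect-100 rejected: remember the failure, don't rethrow.
        writeFailed = true;
      }
      // Safe either way: on failure the response code was already
      // stored in the connection object.
      System.out.println("status=" + conn.getResponseCode());
      if (!writeFailed) {
        // Header/stream accessors only when the write succeeded;
        // after a failure they trigger fresh server calls (JDK-8314978).
        System.out.println("etag=" + conn.getHeaderField("ETag"));
      }
    }
  }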

Contributed by Pranav Saxena
2024-01-21 19:14:54 +00:00
Steve Loughran
d274f778c1
HADOOP-19046. S3A: update AWS V2 SDK to 2.23.5; v1 to 1.12.599 (#6467)
This update ensures that the timeout set in fs.s3a.connection.request.timeout is passed down
to calls to CreateSession made in the AWS SDK to get S3 Express session tokens.

Contributed by Steve Loughran
2024-01-21 19:00:34 +00:00
slfan1989
8444f69511
Preparing for 3.5.0 development (#6411)
Co-authored-by: slfan1989 <slfan1989@apache.org>
2024-01-19 15:05:22 +08:00
Steve Loughran
eeb657e85f
HADOOP-19033. S3A: disable checksums when fs.s3a.checksum.validation = false (#6441)
Add new option fs.s3a.checksum.validation, default false, which
is used when creating s3 clients to enable/disable checksum
validation.

When false, GET response processing is measurably faster.
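
A sketch of re-enabling validation where integrity checks matter more
than GET performance:

  import org.apache.hadoop.conf.Configuration;

  public class ChecksumValidation {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Default is false; true restores SDK checksum validation at the
      // cost of slower GET response processing.
      conf.setBoolean("fs.s3a.checksum.validation", true);
    }
  }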

Contributed by Steve Loughran.
2024-01-17 18:34:14 +00:00
Mukund Thakur
7b1570e2f1
HADOOP-19015. Increase fs.s3a.connection.maximum to 500 to minimize risk of Timeout waiting for connection from pool. (#6372)

Contributed By: Mukund Thakur
2024-01-16 17:06:28 -06:00
Steve Loughran
d378853790
HADOOP-18975 S3A: Add option fs.s3a.endpoint.fips to use AWS FIPS endpoints (#6277)
Adds a new option `fs.s3a.endpoint.fips` to switch the SDK client to use
FIPS endpoints, as an alternative to explicitly declaring them.
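
A sketch of enabling the option:

  import org.apache.hadoop.conf.Configuration;

  public class FipsEndpoint {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Switch the SDK client to the FIPS endpoint; only valid in
      // regions which actually publish FIPS endpoints.
      conf.setBoolean("fs.s3a.endpoint.fips", true);
    }
  }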


* The option is available as a path capability for probes.
* SDK v2 itself doesn't know that some regions don't have FIPS endpoints
* SDK only fails with endpoint + fips flag as a retried exception; with
  this change the S3A client fails fast.
* Adds a new "connecting.md" doc; moves existing docs there and restructures.
* New Tests in ITestS3AEndpointRegion

bucket-info command support:

* added to list of path capabilities
* added -fips flag and test for explicit probe
* also now prints bucket region
* and removed some of the obsolete s3guard options
* updated docs

Contributed by Steve Loughran
2024-01-16 14:16:12 +00:00
Steve Loughran
36198b5edf
HADOOP-19027. S3A: S3AInputStream doesn't recover from HTTP/channel exceptions (#6425)
Differentiate from "EOF out of range/end of GET" from
"EOF channel problems" through
two different subclasses of EOFException and input streams to always
retry on http channel errors; out of range GET requests are not retried.
Currently an EOFException is always treated as a fail-fast call in read()

This allows all existing external code catching EOFException to handle
both, while S3AInputStream cleanly differentiates range errors (mapped
to -1) from channel errors (retried).

- HttpChannelEOFException is subclass of EOFException, so all code
  which catches EOFException is still happy.
  retry policy: connectivityFailure
- RangeNotSatisfiableEOFException is the subclass of EOFException
  raised on 416 GET range errors.
  retry policy: fail
- Method ErrorTranslation.maybeExtractChannelException() to create this
  from shaded/unshaded NoHttpResponseException, using string match to
  avoid classpath problems.
- And do this for SdkClientExceptions with OpenSSL error code WFOPENSSL0035.
  We believe this is the OpenSSL equivalent.
- ErrorTranslation.maybeExtractIOException() to perform this translation as
  appropriate.

S3AInputStream.reopen() code retries on EOF, except on
 RangeNotSatisfiableEOFException,
 which is converted to a -1 response to the caller
 as is done historically.

S3AInputStream knows to handle these with
 read(): HttpChannelEOFException: stream aborting close then retry
 lazySeek(): Map RangeNotSatisfiableEOFException to -1, but do not map
  any other EOFException class raised.

This means that
* out of range reads map to -1
* channel problems in reopen are retried
* channel problems in read() abort the failed http connection so it
  isn't recycled
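
An illustrative sketch of the mapping above; the exception classes are
redeclared here only to keep the sketch self-contained:

  import java.io.EOFException;

  class HttpChannelEOFException extends EOFException { }
  class RangeNotSatisfiableEOFException extends EOFException { }

  public class EofHandlingSketch {
    /** lazySeek-style handling: only 416 range errors map to -1. */
    static int onEof(EOFException e) throws EOFException {
      if (e instanceof RangeNotSatisfiableEOFException) {
        return -1;   // out-of-range GET: the historical -1 contract
      }
      throw e;       // channel errors propagate for the retry policy
    }
  }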

Tests for this using/abusing mocking.

Testing through actually raising 416 exceptions and verifying that
readFully(), char read() and vector reads are all good.

There is no attempt to recover within a readFully(); there's
a boolean constant switch to turn this on, but if anyone does
it a test will spin forever as the inner PositionedReadable.read(position, buffer, len)
downgrades all EOF exceptions to -1.
A new method would need to be added which controls whether to downgrade/rethrow
exceptions.

What does that mean? Possibly reduced resilience to non-retried failures
on the inner stream, even though more channel exceptions are retried on.

Contributed by Steve Loughran
2024-01-16 14:14:03 +00:00
Steve Loughran
2f1e1558b6
HADOOP-19004. S3A: Support Authentication through HttpSigner API (#6324)
Move to the new auth-flow-based signers for AWS.

* Implement a new Signer Initialization Chain
* Add a new instantiation method
* Add a new test
* Fix Reflection Code for SignerInitialization

Contributed by Harshit Gupta
2024-01-11 17:13:31 +00:00
Anuj Modi
e3c135b0b3
HADOOP-18971. [ABFS] Read and cache file footer with fs.azure.footer.read.request.size (#6270)
The option fs.azure.footer.read.request.size sets the size of the footer to
read and cache; the default value of 524288 has been measured to
be good for most workloads running on parquet, ORC and similar file formats.
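
A sketch of tuning the option:

  import org.apache.hadoop.conf.Configuration;

  public class FooterReadTuning {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // Default 524288 (512 KiB) suits parquet/ORC footers.
      conf.setInt("fs.azure.footer.read.request.size", 524288);
    }
  }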

Contributed by Anuj Modi
2024-01-03 12:49:52 +00:00
Pranav Saxena
0b43026cab
HADOOP-17912. ABFS: Support for Encryption Context (#6221)
Contributed by Pranav Saxena and others.
2024-01-01 19:09:44 +00:00