Commit Graph

1675 Commits

Author SHA1 Message Date
Wei-Chiu Chuang
8af2d2feb2
Update version to 3.3.6 2023-06-12 15:34:41 -07:00
Dongjoon Hyun
20d073cb2c
HADOOP-18718. Fix several maven build warnings (#5592). Contributed by Dongjoon Hyun.
Reviewed-by: Gautham B A <gautham.bangalore@gmail.com>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
(cherry picked from commit fb16e00da0)

Conflicts:
	hadoop-tools/hadoop-federation-balance/pom.xml
2023-06-12 10:40:41 -07:00
Steve Loughran
936e9e15d0
MAPREDUCE-7435. Manifest Committer OOM on abfs (#5519)
This modifies the manifest committer so that the list of files
to rename is passed between stages as a file of
writeable entries on the local filesystem.

The map of directories to create is still passed in memory;
this map is built across all tasks, so even if many tasks
created files, if they all write into the same set of directories
the memory needed is O(directories) with the
task count not a factor.

The _SUCCESS file reports on heap size through gauges.
This should give a warning if there are problems.

Contributed by Steve Loughran
2023-06-12 13:43:43 +01:00
monthonk
30dcd044c3
HADOOP-17386. Change default fs.s3a.buffer.dir to be under Yarn container path on yarn applications (#3908)
Co-authored-by: Monthon Klongklaew <monthonk@amazon.com>
Signed-off-by: Akira Ajisaka <aajisaka@apache.org>
2023-06-09 13:40:11 +01:00
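
As a hedged illustration of the option named in this commit (the directory path and class name are made up for this sketch), the buffer directory can still be pinned explicitly instead of relying on the new YARN-container default:

    import org.apache.hadoop.conf.Configuration;

    public class S3ABufferDirExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Explicitly pin the local buffer directory rather than relying on the
        // default, which this change places under the YARN container path when
        // running inside YARN applications.
        conf.set("fs.s3a.buffer.dir", "/data/tmp/s3a-buffer");
        System.out.println(conf.get("fs.s3a.buffer.dir"));
      }
    }
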
Steve Loughran
ab594ec77e
HADOOP-18724. Open file fails with NumberFormatException for S3AFileSystem (#5611)
This:

1. Adds optLong, optDouble, mustLong and mustDouble
   methods to the FSBuilder interface to let callers explicitly
   pass in long and double arguments.
2. The opt() and must() builder calls which take float/double values
   now only set long values instead, so as to avoid problems
   related to overloaded methods resulting in a ".0" being appended
   to a long value.
3. All of the relevant opt/must calls in the hadoop codebase move to
   the new methods.
4. And the s3a code is resilient to parse errors in its numeric options
   - it will downgrade to the default.

This is nominally incompatible, but the floating-point builder methods
were never used: nothing currently expects floating point numbers.

For anyone who wants to safely set numeric builder options across all compatible
releases, convert the number to a string and then use the opt(String, String)
and must(String, String) methods.

Contributed by Steve Loughran
2023-05-16 13:41:17 +01:00
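
A minimal sketch of the cross-release-safe pattern this commit recommends: convert the number to a string and pass it through opt(String, String) rather than the overloaded numeric methods. The option key fs.option.openfile.length, the path, and the class name are assumptions for illustration only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FutureDataInputStreamBuilder;
    import org.apache.hadoop.fs.Path;

    public class OpenFileNumericOptionExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("s3a://bucket/data/file.csv");
        FileSystem fs = path.getFileSystem(conf);
        long knownLength = 1024L;
        FutureDataInputStreamBuilder builder = fs.openFile(path);
        // Pass the numeric value as a string so the call behaves the same on
        // releases with and without the new optLong()/mustLong() methods.
        builder.opt("fs.option.openfile.length", Long.toString(knownLength));
        try (FSDataInputStream in = builder.build().get()) {
          System.out.println("opened " + path);
        }
      }
    }
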
Viraj Jasani
949d5ca20b
HADOOP-18688. S3A audit header to include count of items in delete ops (#5621)
The auditor-generated http referrer URL now includes the count of keys
to delete in the "ks" query parameter

Contributed by Viraj Jasani
2023-05-16 10:41:52 +01:00
Steve Loughran
0f42c311b8
HADOOP-18695. S3A: reject multipart copy requests when disabled (#5548)
Contributed by Steve Loughran.
2023-05-15 14:19:58 +01:00
HarshitGupta11
f312a0c784
HADOOP-18637: S3A to support upload of files greater than 2 GB using DiskBlocks (#5630) (#5641)
Contributed by Harshit Gupta.
2023-05-15 10:46:33 +01:00
Mukund Thakur
86ad35c94c Revert "HADOOP-18637. S3A to support upload of files greater than 2 GB using DiskBlocks (#5630)"
This reverts commit df209dd2e3.

Caused test failures because of incorrect merge conflict resolution.
2023-05-10 14:19:21 -05:00
HarshitGupta11
df209dd2e3
HADOOP-18637. S3A to support upload of files greater than 2 GB using DiskBlocks (#5630)
Contributed By: Harshit Gupta and Steve Loughran
2023-05-10 15:58:56 +01:00
Dongjoon Hyun
4670f9e8b0 HADOOP-18727. Fix WriteOperations.listMultipartUploads function description (#5613)
Contributed by Dongjoon Hyun
2023-05-04 13:06:07 +01:00
Viraj Jasani
0ad7d7c677
HADOOP-18697. S3A prefetch: failure of ITestS3APrefetchingInputStream#testRandomReadLargeFile (#5580)
Contributed by Viraj Jasani
2023-05-02 15:45:37 +01:00
Viraj Jasani
05edfee1f3
HADOOP-18399. S3A Prefetch - SingleFilePerBlockCache to use LocalDirAllocator (#5054)
Contributed by Viraj Jasani
2023-04-28 12:03:30 +01:00
Daniel Carl Jones
0e51a9b55e
HADOOP-18482. ITestS3APrefetchingInputStream to skip if CSV test file unavailable (#4983)
Contributed by Danny Jones
2023-04-28 12:03:30 +01:00
Steve Loughran
8fafc83749
HADOOP-18410. S3AInputStream.unbuffer() does not release http connections - prefetch changes (#4766)
Changes in HADOOP-18410 which are needed for the S3A prefetching stream; needed
as part of the HADOOP-18703 backport

Change-Id: Ib403ca793e29a4416e5d892f9081de5832da3b68
2023-04-28 12:03:30 +01:00
Viraj Jasani
a71c708d17
HADOOP-18189 S3APrefetchingInputStream to support status probes when closed (#5036)
Contributed by Viraj Jasani
2023-04-28 12:03:30 +01:00
Ashutosh Gupta
5ba5980731
HADOOP-18531. Fix assertion failure in ITestS3APrefetchingInputStream (#5149)
This patch MUST be applied to all branches containing HADOOP-18378
so as to ensure reliable test runs.

Contributed by Ashutosh Gupta
2023-04-28 12:03:30 +01:00
Alessandro Passaro
0f1a3f23a5
HADOOP-18378. Implement lazy seek in S3A prefetching. (#4955)
Make S3APrefetchingInputStream.seek() completely lazy. Calls to seek() will not affect the current buffer nor interfere with prefetching, until read() is called.

This change allows various usage patterns to benefit from prefetching, e.g. when calling readFully(position, buffer) in a loop for contiguous positions, the intermediate internal calls to seek() will be no-ops and prefetching will have the same performance as in a sequential read.

Contributed by Alessandro Passaro.
2023-04-28 12:03:30 +01:00
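
A minimal sketch of the access pattern described above: positioned readFully() calls over contiguous ranges, which after this change no longer disturb the prefetcher. The URI, buffer size, and class name are assumptions for illustration only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ContiguousPositionedReadExample {
      public static void main(String[] args) throws Exception {
        Path path = new Path("s3a://bucket/data/large-file.bin");
        FileSystem fs = path.getFileSystem(new Configuration());
        long fileLength = fs.getFileStatus(path).getLen();
        byte[] buffer = new byte[1 << 20];   // 1 MiB chunks
        try (FSDataInputStream in = fs.open(path)) {
          for (long pos = 0; pos + buffer.length <= fileLength; pos += buffer.length) {
            // Positioned read; the internal seek() calls for contiguous
            // positions are no-ops, so prefetching behaves as in a
            // sequential read.
            in.readFully(pos, buffer);
          }
        }
      }
    }
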
Steve Loughran
bb08c90228
HADOOP-18416. fix ITestS3AIOStatisticsContext test failure (#4931)
Uncomment the S3ATestUtils-side part of the original patch.
2023-04-28 12:03:30 +01:00
Viraj Jasani
0fd36df1d2
HADOOP-18377. hadoop-aws build to add a -prefetch profile to run all tests with prefetching (#4914)
Contributed by Viraj Jasani
2023-04-28 12:03:30 +01:00
Viraj Jasani
76e243aacb
HADOOP-18466. Limit the findbugs suppression IS2_INCONSISTENT_SYNC to S3AFileSystem field (#4926)
Follow-on to HADOOP-18455.

Contributed by Viraj Jasani
2023-04-28 12:03:30 +01:00
Viraj Jasani
f07be3bec2
HADOOP-18455. S3A prefetching executor should be closed (#4879)
follow-on patch to HADOOP-18186. 

Contributed by: Viraj Jasani
2023-04-28 12:03:30 +01:00
Viraj Jasani
1c2c6785a0
HADOOP-18186. s3a prefetching to use SemaphoredDelegatingExecutor for submitting work (#4796)
Contributed by Viraj Jasani
2023-04-28 12:03:30 +01:00
Viraj Jasani
f00d77fda4
HADOOP-18380. fs.s3a.prefetch.block.size to be read through longBytesOption (#4762)
Contributed by Viraj Jasani.
2023-04-28 12:03:30 +01:00
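
A hedged illustration of setting the block size named above; since the value is now read through a byte-size parser, a unit suffix such as "8M" is assumed to be accepted. The value and class name are illustrative only.

    import org.apache.hadoop.conf.Configuration;

    public class PrefetchBlockSizeExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Read as a long-bytes option, so a suffixed size is assumed to work.
        conf.set("fs.s3a.prefetch.block.size", "8M");
        System.out.println(conf.get("fs.s3a.prefetch.block.size"));
      }
    }
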
Steve Loughran
4ce763a322
HADOOP-18028. High performance S3A input stream (#4752)
This is the preview release of the HADOOP-18028 S3A performance input stream.
It is still stabilizing, but ready to test.

Contains

HADOOP-18028. High performance S3A input stream (#4109)
	Contributed by Bhalchandra Pandit.

HADOOP-18180. Replace use of twitter util-core with java futures (#4115)
	Contributed by PJ Fanning.

HADOOP-18177. Document prefetching architecture. (#4205)
	Contributed by Ahmar Suhail

HADOOP-18175. fix test failures with prefetching s3a input stream (#4212)
 Contributed by Monthon Klongklaew

HADOOP-18231.  S3A prefetching: fix failing tests & drain stream async.  (#4386)

	* adds in new test for prefetching input stream
	* creates streamStats before opening stream
	* updates numBlocks calculation method
	* fixes ITestS3AOpenCost.testOpenFileLongerLength
	* drains stream async
	* fixes failing unit test

	Contributed by Ahmar Suhail

HADOOP-18254. Disable S3A prefetching by default. (#4469)
	Contributed by Ahmar Suhail

HADOOP-18190. Collect IOStatistics during S3A prefetching (#4458)

	This adds IOStatistics collection to the S3APrefetchingInputStream class, with
	new statistic names in StreamStatistics.

	This stream is not (yet) IOStatisticsContext aware.

	Contributed by Ahmar Suhail

HADOOP-18379 rebase feature/HADOOP-18028-s3a-prefetch to trunk
HADOOP-18187. Convert s3a prefetching to use JavaDoc for fields and enums.
HADOOP-18318. Update class names to be clear they belong to S3A prefetching
	Contributed by Steve Loughran
2023-04-28 12:03:29 +01:00
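
A minimal sketch of turning the (default-off, per HADOOP-18254) prefetching stream on for testing. The key fs.s3a.prefetch.enabled is an assumption not spelled out in the message above, as is the class name.

    import org.apache.hadoop.conf.Configuration;

    public class EnableS3APrefetchExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Assumed key name; prefetching is disabled by default (HADOOP-18254).
        conf.setBoolean("fs.s3a.prefetch.enabled", true);
        System.out.println(conf.getBoolean("fs.s3a.prefetch.enabled", false));
      }
    }
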
Sebastian Baunsgaard
919c3f615b
HADOOP-18660. Filesystem Spelling Mistake (#5475).
Contributed by Sebastian Baunsgaard.

Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2023-04-25 19:59:54 +01:00
Tamas Domok
1b59e3123b
HADOOP-18705. ABFS should exclude incompatible credential providers. (#5560)
Contributed by Tamas Domok.
2023-04-24 15:48:02 +01:00
Steve Loughran
f5464831a0
HADOOP-18696. ITestS3ABucketExistence arn test failures. (#5557)
Explicitly sets the fs.s3a.endpoint.region to eu-west-1 so
the ARN-referenced fs creation fails with unknown store
rather than IllegalArgumentException.

Contributed by Steve Loughran.
2023-04-17 10:21:01 +01:00
sreeb-msft
f324efd247
HADOOP-18012. ABFS: Enable config controlled ETag check for Rename idempotency (#5488)
To support recovery of network failures during rename, the abfs client
fetches the etag of the source file, and when recovering from a
failure, uses this tag to determine whether the rename succeeded
before the failure happened.

* This works for files, but not directories
* It adds the overhead of a HEAD request before each rename.
* The option can be disabled by setting "fs.azure.enable.rename.resilience"
  to false

Contributed by Sree Bhattacharyya
2023-04-05 15:07:39 +01:00
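
A hedged illustration of disabling the new rename resilience (and its extra HEAD request) through the option named above; the class name is illustrative only.

    import org.apache.hadoop.conf.Configuration;

    public class AbfsRenameResilienceExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Rename recovery is on by default; disabling it avoids the extra
        // HEAD request before each rename.
        conf.setBoolean("fs.azure.enable.rename.resilience", false);
        System.out.println(conf.getBoolean("fs.azure.enable.rename.resilience", true));
      }
    }
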
HarshitGupta11
42ed2b9075
HADOOP-18684. S3A filesystem to support binding to other URI schemes (#5521)
Contributed by Harshit Gupta
2023-04-05 14:57:27 +01:00
Pranav Saxena
054afa1180
HADOOP-18647. x-ms-client-request-id to identify the retry of an API. (#5437)
The x-ms-client-request-id now includes a field to indicate a call is a retry of a previous
operation

Contributed by Pranav Saxena
2023-03-30 14:26:12 +01:00
Anmol Asrani
6306f5b2bc
HADOOP-18146: ABFS: Added changes for expect hundred continue header #4039
This change lets the client react pre-emptively to server load without getting 503 responses and the exponential backoff
which follows. This stops performance from suffering so much as capacity limits are approached for an account.

Contributed by Anmol Asrani
2023-03-28 16:32:01 +01:00
Pranav Saxena
2b156c2b32
HADOOP-18606. ABFS: Add reason in x-ms-client-request-id on a retried API call. (#5299)
Contributed by Pranav Saxena
2023-03-28 12:00:57 +01:00
Masatake Iwasaki
dd9ef9e0e7
HADOOP-17746. Compatibility table in directory_markers.md doesn't render right. (#3116)
Contributed by Masatake Iwasaki
2023-03-15 17:11:30 +00:00
Steve Loughran
b75ced1e5d
HADOOP-17836. Improve logging on ABFS error reporting (#3281)
Contributed by Steve Loughran.
2023-03-08 15:31:16 +00:00
Steve Loughran
bca38f84af
HADOOP-18641. Cloud connector dependency and LICENSE fixup. (#5429)
POM and LICENSE fixup of transient dependencies
* Exclude hadoop-cloud-storage imports which come in with hadoop-common
* Add explicit import of hadoop's org.codehaus.jettison declaration
  to hadoop-aliyun
* Tune aliyun jars imports
* Cut duplicate and inconsistent hbase-server declarations from
  hadoop-project
* Update LICENSE-binary for the current set of libraries in the
  hadoop 3.3.5 release.

Contributed by Steve Loughran
2023-02-28 14:05:13 +00:00
Ayush Saxena
84e999b35c
HADOOP-18582. Addendum: Skip unnecessary cleanup logic in DistCp. (#5409)
Followup to the original HADOOP-18582.

Temporary path cleanup is re-enabled for -append jobs
as these will create temporary files when creating or overwriting files.

Contributed by Ayush Saxena
2023-02-22 19:32:05 +00:00
Mehakmeet Singh
a3b0135ea6
HADOOP-18633. fix test AbstractContractDistCpTest#testDistCpUpdateCheckFileSkip (#5422)
Contributed by: Mehakmeet Singh
2023-02-22 14:31:46 +05:30
Mehakmeet Singh
a2ceb09323
HADOOP-18596. Distcp -update to use modification time while checking for file skip. (#5387)
Adding toggleable support for modification time during distcp -update between two stores with incompatible checksum comparison.

Contributed by: Mehakmeet Singh <mehakmeet.singh.behl@gmail.com>
2023-02-14 15:17:27 +05:30
kevin wan
5cd006455d HADOOP-18582. skip unnecessary cleanup logic in distcp (#5251)
Co-authored-by: 万康 <mingge@xiaohongshu.com>
Reviewed-by: Steve Loughran <stevel@apache.org>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
Signed-off-by: Chris Nauroth <cnauroth@apache.org>
(cherry picked from commit 3b7b79b37a)
2023-01-24 23:50:11 +00:00
Steve Loughran
c59444b160
HADOOP-18577. Followup: javadoc fix (#5232)
Fixes a javadoc error which came with
HADOOP-18577. ABFS: Add probes of readahead fix (#5205)

Part of the HADOOP-18521 ABFS readahead fix; MUST be included.

Contributed by Steve Loughran
2022-12-18 12:20:41 +00:00
Steve Loughran
daa33aafff
HADOOP-18577. ABFS: Add probes of readahead fix (#5205)
Followup patch to  HADOOP-18456 as part of HADOOP-18521,
ABFS ReadBufferManager buffer sharing across concurrent HTTP requests

Adds probes of the readahead fix to aid in checking the safety of
the hadoop ABFS client across different releases.

* ReadBufferManager constructor logs the fact it is safe at TRACE
* AbfsInputStream declares it is fixed in toString()
  by including fs.azure.capability.readahead.safe" in the
  result.

The ABFS FileSystem hasPathCapability("fs.azure.capability.readahead.safe")
probe returns true to indicate the client's readahead manager has been fixed
to be safe when prefetching.

All Hadoop releases for which this probe returns false,
and for which the probe "fs.capability.etags.available"
returns true, are at risk of returning invalid data when reading
ADLS Gen2/Azure storage data.

Contributed by Steve Loughran.
2022-12-15 17:11:22 +00:00
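
A minimal sketch of checking the two probes described above from client code; the abfs URI and class name are illustrative only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReadaheadSafetyProbeExample {
      public static void main(String[] args) throws Exception {
        Path root = new Path("abfs://container@account.dfs.core.windows.net/");
        FileSystem fs = root.getFileSystem(new Configuration());
        boolean readaheadSafe = fs.hasPathCapability(root, "fs.azure.capability.readahead.safe");
        boolean etagsAvailable = fs.hasPathCapability(root, "fs.capability.etags.available");
        if (!readaheadSafe && etagsAvailable) {
          // This release is at risk of returning invalid data when prefetching.
          System.out.println("readahead fix not present: disable readahead");
        }
      }
    }
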
Steve Loughran
ba55f370a9
HADOOP-18526. Leak of S3AInstrumentation instances via hadoop Metrics references (#5144)
This has triggered an OOM in a process which was churning through s3a fs
instances; the increased memory footprint of IOStatistics amplified what
must have been a long-standing issue with FS instances being created
and not closed()

*  Makes sure instrumentation is closed when the FS is closed.
*  Uses a weak reference from metrics to instrumentation, so even
   if the FS wasn't closed (see HADOOP-18478), this back reference
   would not cause the S3AInstrumentation reference to be retained.
*  If S3AFileSystem is configured to log at TRACE it will log the
   calling stack of initialize(), to help identify where the
   instance is being created. This should help track down
   the cause of instance leakage.

Contributed by Steve Loughran.
2022-12-14 18:23:04 +00:00
Steve Loughran
654082773c
HADOOP-18183. s3a audit logs to publish range start/end of GET requests. (#5110)
The start and end of the range is set in a new audit param "rg",
e.g. "?rg=100-200"

Contributed by Ankit Saurabh
2022-12-14 16:51:46 +00:00
Pranav Saxena
50a0f33cc9
HADOOP-18546. ABFS. disable purging list of in progress reads in abfs stream close() (#5176)
This addresses HADOOP-18521, "ABFS ReadBufferManager buffer sharing
across concurrent HTTP requests" by not trying to cancel
in progress reads.

It supersedes HADOOP-18528, which disables the prefetching.
If that patch is applied *after* this one, prefetching
will be disabled.

As well as changing the default value in the code,
core-default.xml is updated to set
fs.azure.enable.readahead = true

As a result, if Configuration.get("fs.azure.enable.readahead")
returns a non-null value, then it can be inferred that
it was set either in core-default.xml (the fix is present)
or in core-site.xml (someone asked for it).

Note: this commit contains the followup commit, which is needed
to avoid race conditions in the test.

Contributed by Pranav Saxena.
2022-12-09 13:49:14 +00:00
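
A minimal sketch of the inference described above: a non-null value for the key means it was set either in core-default.xml (the fix is present) or in core-site.xml. The class name is illustrative only.

    import org.apache.hadoop.conf.Configuration;

    public class ReadaheadConfigInferenceExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        String value = conf.get("fs.azure.enable.readahead");
        if (value != null) {
          // Set in core-default.xml (the fix is present) or core-site.xml.
          System.out.println("fs.azure.enable.readahead = " + value);
        } else {
          System.out.println("option not set in any loaded configuration file");
        }
      }
    }
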
Oleksandr Shevchenko
dafc9ef8b6
HADOOP-18563. Misleading AWS SDK S3 timeout configuration comment (#5197)
Contributed by Oleksandr Shevchenko
2022-12-08 15:12:58 +00:00
Anmol Asrani
1cc8cb68f2
HADOOP-18457. ABFS: Support account level throttling (#5034)
This allows abfs request throttling to be shared across all
abfs connections talking to containers belonging to the same abfs storage
account - as that is the level at which IO throttling is applied.

The option is enabled/disabled in the configuration option
"fs.azure.account.throttling.enabled"; the default is "true".

Contributed by Anmol Asrani
2022-11-30 13:14:11 +00:00
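
A hedged illustration of turning the account-level throttling named above off (it is on by default); the class name is illustrative only.

    import org.apache.hadoop.conf.Configuration;

    public class AbfsAccountThrottlingExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Shared, account-level throttling is enabled by default ("true").
        conf.setBoolean("fs.azure.account.throttling.enabled", false);
        System.out.println(conf.getBoolean("fs.azure.account.throttling.enabled", true));
      }
    }
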
sreeb-msft
00249619a0
HADOOP-18498. ABFS: Remove unwanted ? prefix from SAS Tokens (#5136)
This commit parses SAS Tokens and removes the unwanted prefix of '?' from them, if present.

At present, SAS Tokens are provided to the driver through customer implementations of the SASTokenProvider interface. The SAS token providers should not assume that the token will be the first query parameter in the URIs that communicate with the backend. However, it was observed that certain public interfaces provided by Storage to generate SAS can include the '?' as the first character of the SAS Token, which would ideally be the case when it is the first query parameter. Thus, tokens that contain this prefix will lead to an error in the driver due to a clash of query parameters.

To avoid failures for such SAS tokens, after receiving the SAS Token from the provider, the code checks whether a '?' prefix is present and, if so, removes it before further use of the token. This way, users do not have to manually remove the prefix before passing it on as a configuration.

Contributed by Sree Bhattacharya
2022-11-28 11:40:06 +00:00
Mehakmeet Singh
9e53ed3602
HADOOP-18528. Disable abfs prefetching by default (#5134)
Disables block prefetching on ABFS InputStreams, by setting
fs.azure.enable.readahead to false in core-default.xml and
the matching java constant.

This prevents
HADOOP-18521. ABFS ReadBufferManager buffer sharing across concurrent HTTP requests.

Once a fix for that is committed, this change can be reverted.

Contributed by Mehakmeet Singh.
2022-11-15 14:29:33 +00:00
Steve Loughran
b1ea32f91c
HADOOP-18517. ABFS: Add fs.azure.enable.readahead option to disable readahead (#5103)

Adds new config option to turn off readahead
* also allows it to be passed in through openFile(),
* extends ITestAbfsReadWriteAndSeek to use the option, including one
  replicated test...that shows that turning it off is slower.

Important: this does not address the critical data corruption issue
HADOOP-18521. ABFS ReadBufferManager buffer sharing across concurrent HTTP requests

What it does do is provide a way to completely bypass the ReadBufferManager.
To mitigate the problem, either fs.azure.enable.readahead needs to be set to false,
or "fs.azure.readaheadqueue.depth" set to 0 - this still goes near the (broken)
ReadBufferManager code, but doesn't trigger the bug.

For safe reading of files through the ABFS connector, readahead MUST be disabled
or the followup fix to HADOOP-18521 applied

Contributed by Steve Loughran
2022-11-08 13:41:31 +00:00
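
A minimal sketch of applying the two mitigations named above through configuration; the class name is illustrative only.

    import org.apache.hadoop.conf.Configuration;

    public class AbfsReadaheadMitigationExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Either bypass the ReadBufferManager entirely...
        conf.setBoolean("fs.azure.enable.readahead", false);
        // ...or keep readahead enabled but set the queue depth to 0.
        conf.setInt("fs.azure.readaheadqueue.depth", 0);
        System.out.println(conf.get("fs.azure.readaheadqueue.depth"));
      }
    }
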