Commit Graph

634 Commits

Mukund Thakur
01bde4afff Revert "HADOOP-19015. Increase fs.s3a.connection.maximum to 500 to minimize risk of Timeout waiting for connection from pool"
Pushed it by mistake. So sorry.
This reverts commit e28f83a1eb.
2023-12-19 14:12:21 -06:00
Mukund Thakur
e28f83a1eb HADOOP-19015. Increase fs.s3a.connection.maximum to 500 to minimize risk of Timeout waiting for connection from pool 2023-12-19 14:04:07 -06:00
Steve Loughran
25089dc9ee
HADOOP-18997. S3A: Add option fs.s3a.s3express.create.session to disable S3Express CreateSession (#6316)
Adds a new option fs.s3a.s3express.create.session; default is true.

When false, this disables the CreateSession call to create/refresh temporary
session credentials when working with an Amazon S3 Express store.

This avoids having to give the caller the new IAM role permission,
at the expense of every remote call on the S3 Express store having to
include the latency of an IAM permissions check.
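
For example, a core-site.xml sketch disabling the CreateSession call
(property name from this commit; the placement shown is illustrative):

<property>
  <name>fs.s3a.s3express.create.session</name>
  <value>false</value>
</property>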

* fs.s3a.s3express.create.session is set to false in tests which generate new
  role permissions and call AssumeRole
* move ApiCallTimeoutException logic to after sdk exceptions get translated
  to IOEs. This lines up the code for any future AWS SDK which surfaces the
  underlying cause here.
* Tests will automatically skip ACL, storage class, S3 Select or encryption tests
  when the target fs is S3Express.
* Same for the out-of-order multipart uploader test cases and v1 listing.
* bucket tool s3 test treats invalid location error as a successful invocation
  of the create bucket attempt

Contributed by Steve Loughran
2023-12-07 13:08:47 +00:00
Steve Loughran
e221231e81
HADOOP-18996. S3A to provide full support for S3 Express One Zone (#6308)
This adds broad support for Amazon S3 Express One Zone to the S3A connector,
particularly resilience of other parts of the codebase to LIST operations returning
paths under which only in-progress uploads are taking place.

hadoop-common and hadoop-mapreduce treewalking routines all cope with this;
distcp is left alone.

There are still some outstanding followup issues, and we expect more to surface
with extended use.

Contains HADOOP-18955. AWS SDK v2: add path capability probe "fs.s3a.capability.aws.v2"
* lets us probe for AWS SDK version
* bucket-info reports it

Contains HADOOP-18961 S3A: add s3guard command "bucket"

hadoop s3guard bucket -create -region us-west-2 -zone usw2-az2 \
  s3a://stevel--usw2-az2--x-s3/

* requires -zone if bucket is zonal
* rejects it if not
* rejects zonal bucket suffixes if endpoint is not aws (safety feature)
* imperfect, but a functional starting point.

New path capability "fs.s3a.capability.zonal.storage"
* Used in tests to determine whether pending uploads manifest as paths
* cli tests can probe for this
* bucket-info reports it
* some tests disable/change assertions as appropriate

----

Shell commands fail on S3Express buckets if there are pending uploads.

New path capability in hadoop-common
   "fs.capability.directory.listing.inconsistent"

1. S3AFS returns true on a S3 Express bucket
2. FileUtil.maybeIgnoreMissingDirectory(fs, path, fnfe)
   decides whether to swallow the exception or not.
3. This is used in: Shell, FileInputFormat, LocatedFileStatusFetcher

Fixes with tests
* fs -ls -R
* fs -du
* fs -df
* fs -find
* S3AFS.getContentSummary() (maybe...should discuss)
* mapred LocatedFileStatusFetcher
* Globber, HADOOP-15478 already fixed that when dealing with
   S3 inconsistencies
* FileInputFormat

S3Express CreateSession request is permitted outside audit spans.

S3 Bulk Delete calls request the store to return the list of deleted objects
if RequestFactoryImpl is set to trace.
log4j.logger.org.apache.hadoop.fs.s3a.impl.RequestFactoryImpl=TRACE

Test Changes
 * ITestS3AMiscOperations removes all tests which require unencrypted
   buckets. AWS S3 defaults to SSE-S3 everywhere.
 * ITestBucketTool to test new tool without actually creating new
   buckets.
 * S3ATestUtils add methods to skip test suites/cases if store is/is not
   S3Express
 * Cutting down on "is this a S3Express bucket" logic to checking for a trailing --x-s3
   string and not worrying about AZ naming logic. Commented out relevant tests.
 * ITestTreewalkProblems validated against standard and S3Express stores

Outstanding

 * Distcp: tests show it fails. Proposed: release notes.

---

x-amz-checksum header not found when signing S3Express messages

This modifies the custom signer in ITestCustomSigner to be a subclass
of AwsS3V4Signer with a goal of preventing signing problems with
S3 Express stores.

----

RemoteFileChanged renaming multipart file

Maps 412 status code to RemoteFileChangedException

Modifies huge file tests
- Adds a check on etag match for stat vs list
- ITestS3AHugeFilesByteBufferBlocks renames parent dirs, rather than
  files, to replicate distcp better.

----

S3Express custom Signing cannot handle bulk delete

Copy custom signer into production JAR, to enable downstream testing

Extend ITestCustomSigner to cover more filesystem operations
- PUT
- POST
- COPY
- LIST
- Bulk delete through delete() and rename()
- list + abort multipart uploads

Suite is parameterized on bulk delete enabled/disabled.

To use the new signer for a full test run:

<property>
  <name>fs.s3a.custom.signers</name>
  <value>CustomSdkSigner:org.apache.hadoop.fs.s3a.auth.CustomSdkSigner</value>
</property>

<property>
  <name>fs.s3a.s3.signing-algorithm</name>
  <value>CustomSdkSigner</value>
</property>
2023-12-01 14:16:33 +00:00
Steve Loughran
5cda162a80
HADOOP-18915. Tune/extend S3A http connection and thread pool settings (#6180)
Increases existing pool sizes, as with server scale and vectored
IO, larger pools are needed:

  fs.s3a.connection.maximum 200
  fs.s3a.threads.max 96

Adds new configuration options for v2 sdk internal timeouts,
both with default of 60s:

  fs.s3a.connection.acquisition.timeout
  fs.s3a.connection.idle.time

All the pool/timeout options are covered in performance.md

Moves all timeout/duration options in the s3a FS to taking
temporal units (h, m, s, ms,...); retaining the previous default
unit (normally millisecond)

Adds a minimum duration for most of these, in order to recover from
deployments where a timeout has been set on the assumption the unit
was seconds, not millis.
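
As an illustrative core-site.xml sketch, using the defaults named above
(the "60s" values assume the new temporal-unit support):

<property>
  <name>fs.s3a.connection.maximum</name>
  <value>200</value>
</property>
<property>
  <name>fs.s3a.threads.max</name>
  <value>96</value>
</property>
<property>
  <name>fs.s3a.connection.acquisition.timeout</name>
  <value>60s</value>
</property>
<property>
  <name>fs.s3a.connection.idle.time</name>
  <value>60s</value>
</property>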

Uses java.time.Duration throughout the codebase;
retaining the older numeric constants in
org.apache.hadoop.fs.s3a.Constants for backwards compatibility;
these are now deprecated.

Adds new class AWSApiCallTimeoutException to be raised on
sdk-related methods and also gateway timeouts. This is a subclass
of org.apache.hadoop.net.ConnectTimeoutException to support
existing retry logic.

+ reverted default value of fs.s3a.create.performance to false; 
inadvertently set to true during testing.

Contributed by Steve Loughran.
2023-11-29 15:12:44 +00:00
Steve Loughran
476b90f3e5
HADOOP-18965. ITestS3AHugeFilesEncryption failure (#6261)
Followup to:
HADOOP-18850 Enable dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)

Contributed by Steve Loughran
2023-11-24 17:24:12 +00:00
Viraj Jasani
f1e4376626
HADOOP-18959. Use builder for prefetch CachingBlockManager. (#6240) Contributed by Viraj Jasani 2023-11-23 11:07:44 +00:00
Steve Loughran
b108e9e2d8
HADOOP-18969. S3A: AbstractS3ACostTest to clear bucket fs.s3a.create.performance (#6264)
Add the option to the removeBaseAndBucketOverrides() list
2023-11-21 14:55:13 +00:00
PJ Fanning
f609460bda
HADOOP-18957. Use StandardCharsets.UTF_8 (#6231). Contributed by PJ Fanning.
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2023-11-20 23:44:48 +05:30
Steve Loughran
ef7fb64764
HADOOP-18925. S3A: option to enable/disable CopyFromLocalOperation (#6163)
Add a new option:
fs.s3a.optimized.copy.from.local.enabled

This will enable (default) or disable the
optimized CopyFromLocalOperation upload operation
when copyFromLocalFile() is invoked.

When false the superclass implementation is used; duration
statistics are still collected, though audit span entries
in logs will be for the individual fs operations, not the
overall operation.
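
For example, a sketch of falling back to the superclass implementation
(option name from this commit; the value shown is the non-default):

<property>
  <name>fs.s3a.optimized.copy.from.local.enabled</name>
  <value>false</value>
</property>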

Contributed by Steve Loughran
2023-11-06 16:00:56 +00:00
Viraj Jasani
cf3a4b3bb7
HADOOP-18850. S3A: Enable dual-layer server-side encryption with AWS KMS keys (#6140)
Contributed by Viraj Jasani
2023-11-01 13:30:35 +00:00
Steve Loughran
7ec636deec
HADOOP-18930. Make fs.s3a.create.performance a bucket-wide setting. (#6168)
If fs.s3a.create.performance is set on a bucket
- All file overwrite checks are skipped, even if the caller says
  otherwise.
- All directory existence checks are skipped.
- Marker deletion is *always* skipped.

This eliminates a HEAD and a LIST for every creation.
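
A sketch of enabling the option in core-site.xml; per-bucket overrides through
the usual fs.s3a.bucket.NAME. prefix are assumed to apply as well:

<property>
  <name>fs.s3a.create.performance</name>
  <value>true</value>
</property>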

* New path capability "fs.s3a.create.performance.enabled" true
  if the option is enabled.
* Parameterize ITestS3AContractCreate to expect the different
  outcomes
* Parameterize ITestCreateFileCost similarly, with
  changed cost assertions there.
* create(/) raises an IOE. existing bug only noticed here.

Contributed by Steve Loughran
2023-10-27 12:23:55 +01:00
Steve Loughran
8bd1f65efc
HADOOP-18948. S3A. Add option fs.s3a.directory.operations.purge.uploads to purge on rename/delete (#6218)
S3A directory delete and rename will optionally abort all pending multipart uploads
under their to-be-deleted paths when

fs.s3a.directory.operations.purge.uploads is true

It is off by default.

The filesystem's hasPathCapability("fs.s3a.directory.operations.purge.uploads")
probe will return true when this feature is enabled.
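
A minimal sketch of enabling the purge (option name as in the commit title):

<property>
  <name>fs.s3a.directory.operations.purge.uploads</name>
  <value>true</value>
</property>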

Multipart uploads may accrue from interrupted data writes, uncommitted 
staging/magic committer jobs and other operations/applications. On AWS S3
lifecycle rules are the recommended way to clean these; this change improves
support for stores which lack these rules.

Contributed by Steve Loughran
2023-10-25 17:39:16 +01:00
Steve Loughran
8b974bcc1f
HADOOP-18889. Third party storage followup. (#6186)
Followup to HADOOP-18889 third party store support;

Fix some minor review comments which came in after the merge.
2023-10-24 18:17:52 +01:00
Steve Loughran
3e0fcda7a5
HADOOP-18945. S3A. IAMInstanceCredentialsProvider failing. (#6202)
This restores asynchronous retrieval/refresh of any AWS credentials provided by the
EC2 instance/container in which the process is running.

Contributed by Steve Loughran
2023-10-23 14:24:30 +01:00
Viraj Jasani
acaf8ef3ca
HADOOP-18918. ITestS3GuardTool fails if SSE/DSSE encryption is used (#6165)
Contributed by Viraj Jasani.
2023-10-20 10:47:44 +01:00
Steve Loughran
215cb15beb
HADOOP-18946. TestErrorTranslation failure (#6205)
Fixes TestErrorTranslation.testMultiObjectExceptionFilledIn() failure
which came in with HADOOP-18939.

Contributed by Steve Loughran
2023-10-20 10:13:05 +01:00
Steve Loughran
e0563fed50
HADOOP-18908. Improve S3A region handling. (#6187)
S3A region logic improved for better inference and
to be compatible with previous releases

1. If you are using an AWS S3 AccessPoint, its region is determined
   from the ARN itself.
2. If fs.s3a.endpoint.region is set and non-empty, it is used.
3. If fs.s3a.endpoint is an s3.*.amazonaws.com url,
   the region is determined by parsing the URL.
   Note: vpce endpoints are not handled by this.
4. If fs.s3a.endpoint.region==null, and none could be determined
   from the endpoint, use us-east-2 as default.
5. If fs.s3a.endpoint.region=="" then it is handed off to
   the default AWS SDK resolution process.

Consult the AWS SDK documentation for the details on its resolution
process, knowing that it is complicated and may use environment variables,
entries in ~/.aws/config, IAM instance information within
EC2 deployments and possibly even JSON resources on the classpath.
Put differently: it is somewhat brittle across deployments.
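
For example, pinning the region explicitly as in rule 2 above
(the region value here is purely illustrative):

<property>
  <name>fs.s3a.endpoint.region</name>
  <value>eu-west-1</value>
</property>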

Contributed by Ahmar Suhail
2023-10-17 15:37:36 +01:00
Steve Loughran
e5eb404bb3
HADOOP-18939. NPE in AWS v2 SDK RetryOnErrorCodeCondition.shouldRetry() (#6193)
MultiObjectDeleteException to fill in the error details

See also: https://github.com/aws/aws-sdk-java-v2/issues/4600

Contributed by Steve Loughran
2023-10-17 15:17:16 +01:00
Steve Loughran
81edbebdd8
HADOOP-18889. S3A v2 SDK third party support (#6141)
Tune AWS v2 SDK changes based on testing with third party stores
including GCS. 

Contains HADOOP-18889. S3A v2 SDK error translations and troubleshooting docs

* Changes needed to work with multiple third party stores
* New third_party_stores document on how to bind to and test
  third party stores, including google gcs (which works!)
* Troubleshooting docs mostly updated for v2 SDK

Exception translation/resilience

* New AWSUnsupportedFeatureException for unsupported/unavailable errors
* Handle 501 method unimplemented as one of these
* Error codes > 500 mapped to the AWSStatus500Exception if no explicit
  handler.
* Precondition errors handled a bit better
* GCS throttle exception also recognized.
* GCS raises 404 on a delete of a file which doesn't exist: swallow it.
* Error translation uses reflection to create an IOE of the right type.
  All IOEs at the bottom of an AWS stack chain are regenerated,
  then a new exception of that specific type is created, with the top-level
  exception as its cause. This is done to retain the whole stack chain.
* Reduce the number of retries within the AWS SDK
* And those of s3a code.
* S3ARetryPolicy explicitly declares SocketException as a connectivity failure
  but subclasses BindException
* SocketTimeoutException also considered connectivity  
* Log at debug whenever retry policies looked up
* Reorder exceptions to alphabetical order, with commentary
* Review use of the Invoke.retry() method 

 The reduction in retries is because it is clear, when you try to create a bucket
 which doesn't resolve, that even an UnknownHostException takes over 90 seconds to
 eventually fail, and then hits the s3a retry code.
 - Reducing the SDK retries means these escalate to our code better.
 - Cutting back on our own retries makes it a bit more responsive for most real
 deployments.
 - maybeTranslateNetworkException() and the s3a retry policy mean that
   unknown host exceptions are recognised and fail fast.

Contributed by Steve Loughran
2023-10-12 17:47:44 +01:00
Viraj Jasani
27cb551821
HADOOP-18829. S3A prefetch LRU cache eviction metrics (#5893)
Contributed by: Viraj Jasani
2023-09-21 14:31:44 +05:30
Syed Shameerur Rahman
5512c9f924
HADOOP-18797. Support Concurrent Writes With S3A Magic Committer (#6006)
Jobs which commit their work to S3 through the
magic committer now use a unique magic path
containing the job ID:
 __magic_job-${jobid}

This allows for multiple jobs to write
to the same destination simultaneously.

Contributed by Syed Shameerur Rahman
2023-09-20 11:26:42 +01:00
Steve Loughran
120620c1b7
HADOOP-18888. S3A. createS3AsyncClient() always enables multipart uploads (#6056)
* The multipart flag fs.s3a.multipart.uploads.enabled is passed to the async client created
* S3A connector bypasses the transfer manager entirely if disabled or for small files.
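
A sketch of disabling multipart uploads, so the transfer manager is bypassed
(flag name from this commit):

<property>
  <name>fs.s3a.multipart.uploads.enabled</name>
  <value>false</value>
</property>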

Contributed by Steve Loughran
2023-09-15 15:45:09 +01:00
Steve Loughran
81d90fd65b
HADOOP-18073. S3A: Upgrade AWS SDK to V2 (#5995)
This patch migrates the S3A connector to use the V2 AWS SDK.

This is a significant change at the source code level.
Any applications using the internal extension/override points in
the filesystem connector are likely to break.

This includes but is not limited to:
- Code invoking methods on the S3AFileSystem class
  which used classes from the V1 SDK.
- The ability to define the factory for the `AmazonS3` client, and
  to retrieve it from the S3AFileSystem. There is a new factory
  API and a special interface S3AInternals to access a limited
  set of internal classes and operations.
- Delegation token and auditing extensions.
- Classes trying to integrate with the AWS SDK.

All standard V1 credential providers listed in the option 
fs.s3a.aws.credentials.provider will be automatically remapped to their
V2 equivalent.

Other V1 Credential Providers are supported, but only if the V1 SDK is
added back to the classpath.  

The SDK Signing plugin has changed; all v1 signers are incompatible.
There is no support for the S3 "v2" signing algorithm.

Finally, the aws-sdk-bundle JAR has been replaced by the shaded V2
equivalent, "bundle.jar", which is now exported by the hadoop-aws module.

Consult the document aws_sdk_upgrade for the full details.

Contributed by Ahmar Suhail + some bits by Steve Loughran
2023-09-11 14:30:25 +01:00
Mukund Thakur
28d190b904
HADOOP-18845. Add ability to configure s3 connection ttl using fs.s3a.connection.ttl (#5948)
Contributed By: Mukund Thakur
2023-08-25 12:23:17 -05:00
Yuting Chen
ce5bc4891f
HADOOP-18328. Add documentation for S3A support on S3 Outposts (#5976)
Contributed by Yuting Chen
2023-08-24 10:16:10 +01:00
suzu
70b6c155bc
HADOOP-18328. S3A to support S3 on Outposts (#4533)
Contributed by Sotetsu Suzugamine
2023-08-23 11:38:07 +01:00
ahmarsuhail
24f5f708df
HADOOP-18778. Fixes failing tests when CSE is enabled. (#5763)
Contributed By: Ahmar Suhail <ahmarsu@amazon.co.uk>
2023-07-26 21:56:49 +05:30
Viraj Jasani
90793e1bce
HADOOP-18805. S3A prefetch tests to work with small files (#5851)
Contributed by Viraj Jasani
2023-07-24 19:36:57 +01:00
Steve Loughran
104a323f6f
HADOOP-18795. S3A DelegationToken plugin to expand return type of binding (#5821)
Adds a class DelegationBindingInfo which contains binding info
beyond just the AWS credential list.


The binding class can be expanded when needed. Until then, all existing
implementations will work, as the new method
  DelegationBindingInfo deploy(AbstractS3ATokenIdentifier retrievedIdentifier)
falls back to the original methods.
2023-07-20 11:27:58 +01:00
Moditha Hewasinghage
b6b259066f
HADOOP-18757. S3A Committer only finalizes the commits in a single thread (#5706)
Contributed by Moditha Hewasinghage
2023-07-19 10:03:41 +01:00
Viraj Jasani
e7d74f3d59
HADOOP-18291. S3A prefetch - Implement thread-safe LRU cache for SingleFilePerBlockCache (#5754)
Contributed by Viraj Jasani
2023-07-14 10:21:01 +01:00
Harunobu Daikoku
e3683a954f
HADOOP-18793. S3A StagingCommitter does not clean up staging-uploads directory (#5818)
Contributed by Harunobu Daikoku
2023-07-08 12:53:54 +01:00
Steve Loughran
7a45ef4164
MAPREDUCE-7435. Manifest Committer OOM on abfs (#5519)
This modifies the manifest committer so that the list of files
to rename is passed between stages as a file of
writeable entries on the local filesystem.

The map of directories to create is still passed in memory;
this map is built across all tasks, so even if many tasks
created files, if they all write into the same set of directories
the memory needed is O(directories) with the
task count not a factor.

The _SUCCESS file reports on heap size through gauges.
This should give a warning if there are problems.

Contributed by Steve Loughran
2023-06-09 17:00:59 +01:00
Steve Loughran
7bb09f1010
HADOOP-18752. Change fs.s3a.directory.marker.retention to "keep" (#5689)
This 
1. changes the default value of fs.s3a.directory.marker.retention
   to "keep"
2. no longer prints a message when an S3A FS instance is
   instantiated with any option other than delete.

Switching to marker retention improves performance
on any S3 bucket as there are no needless marker DELETE requests
-leading to a reduction in write IOPS and in any delays waiting
for the DELETE call to finish.

There are *very* significant improvements on versioned buckets,
where tombstone markers slow down LIST operations: the more
tombstones there are, the worse query planning gets.

Having versioning enabled on production stores is the foundation
of any data protection strategy, so this has tangible benefits
in production.

It is *not* compatible with older hadoop releases; specifically
- Hadoop branch 2 < 2.10.2
- Any release of Hadoop 3.0.x and Hadoop 3.1.x
- Hadoop 3.2.0 and 3.2.1
- Hadoop 3.3.0
Incompatible releases have no problems reading data in stores
where markers are retained, but can get confused when deleting
or renaming directories.

If you are still using older versions to write data, and cannot
yet upgrade, switch the option back to "delete"
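
For example, a sketch of explicitly switching back for compatibility with
the older releases listed above:

<property>
  <name>fs.s3a.directory.marker.retention</name>
  <value>delete</value>
</property>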

Contributed by Steve Loughran
2023-06-08 12:12:29 +01:00
Steve Loughran
e6b54f7f68
Revert "HADOOP-18706. Improve S3ABlockOutputStream recovery (#5563)"
This reverts commit 372631c566.

Reverted due to HADOOP-18744.
2023-05-24 19:22:22 +01:00
Viraj Jasani
bef40e9427
HADOOP-18688. S3A audit header to include count of items in delete ops (#5621)
The auditor-generated http referrer URL now includes the count of keys
to delete in the "ks" query parameter

Contributed by Viraj Jasani
2023-05-16 10:40:16 +01:00
Steve Loughran
e76c09ac3b
HADOOP-18724. Open file fails with NumberFormatException for S3AFileSystem (#5611)
This:

1. Adds optLong, optDouble, mustLong and mustDouble
   methods to the FSBuilder interface to let callers explicitly
   pass in long and double arguments.
2. The opt() and must() builder calls which take float/double values
   now only set long values instead, so as to avoid problems
   related to overloaded methods resulting in a ".0" being appended
   to a long value.
3. All of the relevant opt/must calls in the hadoop codebase move to
   the new methods
4. And the s3a code is resilient to parse errors in its numeric options
   - it will downgrade to the default.

This is nominally incompatible, but the floating-point builder methods
were never used: nothing currently expects floating point numbers.

For anyone who wants to safely set numeric builder options across all compatible
releases, convert the number to a string and then use the opt(String, String)
and must(String, String) methods.

Contributed by Steve Loughran
2023-05-11 17:57:25 +01:00
Chris
372631c566
HADOOP-18706. Improve S3ABlockOutputStream recovery (#5563)
Contributed by Chris Bevard
2023-05-05 11:57:42 +01:00
Dongjoon Hyun
27776ac45e
HADOOP-18727. Fix WriteOperations.listMultipartUploads function description (#5613)
Contributed by Dongjoon Hyun
2023-05-04 13:03:48 +01:00
Viraj Jasani
bfcf5dd03b
HADOOP-18697. S3A prefetch: failure of ITestS3APrefetchingInputStream#testRandomReadLargeFile (#5580)
Contributed by Viraj Jasani
2023-05-02 15:21:46 +01:00
Steve Loughran
eb749ddd4d
HADOOP-18695. S3A: reject multipart copy requests when disabled (#5548)
Contributed by Steve Loughran.
2023-04-27 10:59:46 +01:00
Sebastian Baunsgaard
6aac6cb212
HADOOP-18660. Filesystem Spelling Mistake (#5475). Contributed by Sebastian Baunsgaard.
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2023-04-25 21:44:04 +05:30
Viraj Jasani
0e3aafe6c0
HADOOP-18399. S3A Prefetch - SingleFilePerBlockCache to use LocalDirAllocator (#5054)
Contributed by Viraj Jasani
2023-04-18 16:37:48 +01:00
Steve Loughran
6ea10cf41b
HADOOP-18696. ITestS3ABucketExistence arn test failures. (#5557)
Explicitly sets the fs.s3a.endpoint.region to eu-west-1 so
the ARN-referenced fs creation fails with unknown store
rather than IllegalArgumentException.

Steve Loughran
2023-04-17 10:18:33 +01:00
Steve Loughran
7c3d94a032
HADOOP-18637. S3A to support upload of files greater than 2 GB using DiskBlocks (#5543)
Contributed By: HarshitGupta and Steve Loughran
2023-04-12 05:17:45 +05:30
HarshitGupta11
dfb2ca0a64
HADOOP-18684. S3A filesystem to support binding to other URI schemes (#5521)
Contributed by Harshit Gupta
2023-04-05 12:42:11 +01:00
Masatake Iwasaki
7c42d0f7da
HADOOP-17746. Compatibility table in directory_markers.md doesn't render right. (#3116)
Contributed by Masatake Iwasaki
2023-03-15 17:10:42 +00:00
Ankit Saurabh
22f6d55b71
HADOOP-18246. Reduce lower limit on fs.s3a.prefetch.block.size to 1 byte. (#5120)
The minimum value of fs.s3a.prefetch.block.size is now 1

Contributed by Ankit Saurabh
2023-02-02 18:45:21 +00:00
Nikita Eshkeev
4de31123ce
Fix "the the" and friends typos (#5267)
Signed-off-by: Nikita Eshkeev <neshkeev@yandex.ru>
2023-01-17 03:33:59 +08:00
ahmarsuhail
9c6eeb699e
HADOOP-18320. Fixes typos in Delegation Tokens documentation. (#4499)
Contributed By: Ahmar Suhail
2023-01-09 22:18:41 +05:30
Steve Loughran
aaf92fe183
HADOOP-18526. Leak of S3AInstrumentation instances via hadoop Metrics references (#5144)
This has triggered an OOM in a process which was churning through s3a fs
instances; the increased memory footprint of IOStatistics amplified what
must have been a long-standing issue with FS instances being created
and not closed()

*  Makes sure instrumentation is closed when the FS is closed.
*  Uses a weak reference from metrics to instrumentation, so even
   if the FS wasn't closed (see HADOOP-18478), this back reference
   would not cause the S3AInstrumentation reference to be retained.
*  If S3AFileSystem is configured to log at TRACE it will log the
   calling stack of initialize(), to help identify where the
   instance is being created. This should help track down
   the cause of instance leakage.

Contributed by Steve Loughran.
2022-12-14 18:21:03 +00:00
Steve Loughran
1cecf8ab70
HADOOP-18183. s3a audit logs to publish range start/end of GET requests. (#5110)
The start and end of the range is set in a new audit param "rg",
e.g "?rg=100-200"

Contributed by Ankit Saurabh
2022-12-14 14:01:28 +00:00
Oleksandr Shevchenko
0a4528cd7f
HADOOP-18563. Misleading AWS SDK S3 timeout configuration comment (#5197)
Contributed by Oleksandr Shevchenko
2022-12-08 15:07:59 +00:00
Ashutosh Gupta
2c1158e858
HADOOP-18531. Fix assertion failure in ITestS3APrefetchingInputStream (#5149)
This patch MUST be applied to all branches containing HADOOP-18378
so as to ensure reliable test runs.

Contributed by Ashutosh Gupta
2022-11-23 17:47:39 +00:00
Daniel Carl Jones
0b577992ef
HADOOP-18482. ITestS3APrefetchingInputStream to skip if CSV test file unavailable (#4983)
Contributed by Danny Jones
2022-10-31 21:19:34 +00:00
sabertiger
af7dd660e0
HADOOP-18233. Possible race condition with TemporaryAWSCredentialsProvider (#5024)
This fixes a race condition with the TemporaryAWSCredentialsProvider,
one which has existed for a long time but which only surfaced
(usually in Spark) when the bucket existence probe was disabled
by setting fs.s3a.bucket.probe to 0 - a performance speedup
which was made the default in HADOOP-17454.

Contributed by Jimmy Wong.
2022-10-31 12:43:30 +00:00
Mehakmeet Singh
fba46aa5bb
HADOOP-18499. S3A to support HTTPS web proxies (#5051)
The option "fs.s3a.proxy.ssl.enabled" controls
whether the s3a connects to a proxy over HTTP (default) or HTTPS.
Set to "true" to use HTTPS.

Contributed by Mehakmeet Singh
2022-10-26 11:45:20 +01:00
Viraj Jasani
8aa04b0b24
HADOOP-18189 S3APrefetchingInputStream to support status probes when closed (#5036)
Contributed by Viraj Jasani
2022-10-19 14:38:11 +01:00
Daniel Carl Jones
6207ac47e0
HADOOP-18304. Improve user-facing S3A committers documentation (#4478)
Contributed by: Daniel Carl Jones
2022-10-19 12:56:47 +05:30
Steve Loughran
d80db6c9e5
HADOOP-18476. Abfs and S3A FileContext bindings to close wrapped filesystems in finalizer (#4966)
This is to try and close the underlying filesystems when the FileContext APIs are used.
Without this, threads may be leaked
2022-10-18 14:53:02 +01:00
Ankit Saurabh
2d91daab5e
HADOOP-18156. Address JavaDoc warnings in classes like MarkerTool, S3ObjectAttributes, etc (#4965)
Contributed by Ankit Saurabh
2022-10-17 18:10:47 +01:00
ahmarsuhail
77e551a478
HADOOP-18481. AWS v2 SDK upgrade log to not warn about standard AWS Credential Providers. (#4973)
The AWS SDKV2 upgrade log no longer warns about instantiation
of the v1 SDK credential providers which are commonly used in
s3a configurations:

* com.amazonaws.auth.EnvironmentVariableCredentialsProvider
* com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper
* com.amazonaws.auth.InstanceProfileCredentialsProvider

When the hadoop-aws module moves to the v2 SDK, references to these
credential providers will be rewritten to their v2 equivalents.

Follow-on to HADOOP-18382. "Upgrade AWS SDK to V2 - Prerequisites"

Contributed by Ahmar Suhail
2022-10-14 10:48:09 +01:00
monthonk
9439d8e4e4
HADOOP-18292. Fix s3 select tests when running against unsupported storage class (#4489)
Follow-on from HADOOP-12020.

Contributed by Monthon Klongklaew
2022-10-13 13:13:36 +01:00
Mukund Thakur
be70bbb4be
HADOOP-18460. checkIfVectoredIOStopped before populating the buffers (#4986)
Contributed by Mukund Thakur
2022-10-10 11:17:45 +01:00
Daniel Carl Jones
7ec762a5fd
HADOOP-18465. Fix S3A SSE test skip when encryption is disabled (#4925)
Contributed by Daniel Carl Jones
2022-10-06 12:42:01 +01:00
Alessandro Passaro
1675a28e5a
HADOOP-18378. Implement lazy seek in S3A prefetching. (#4955)
Make S3APrefetchingInputStream.seek() completely lazy. Calls to seek() will not affect the current buffer nor interfere with prefetching, until read() is called.

This change allows various usage patterns to benefit from prefetching, e.g. when calling readFully(position, buffer) in a loop for contiguous positions the intermediate internal calls to seek() will be noops and prefetching will have the same performance as in a sequential read.

Contributed by Alessandro Passaro.
2022-10-06 12:00:41 +01:00
Mukund Thakur
735e35d648
HADOOP-18347. S3A Vectored IO to use bounded thread pool. (#4918)
part of HADOOP-18103.

Also introducing a config fs.s3a.vectored.active.ranged.reads
to configure the maximum number of range reads a
single input stream can have active (downloading, or queued)
to the central FileSystem instance's pool of queued operations.
This stops a single stream overloading the shared thread pool.
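
An illustrative setting (the value shown is arbitrary, not necessarily the
shipped default):

<property>
  <name>fs.s3a.vectored.active.ranged.reads</name>
  <value>4</value>
</property>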

Contributed by: Mukund Thakur
2022-09-27 21:13:07 +05:30
Viraj Jasani
084b68e380
HADOOP-18455. S3A prefetching executor should be closed (#4879)
follow-on patch to HADOOP-18186. 

Contributed by: Viraj Jasani
2022-09-22 00:22:41 +05:30
Viraj Jasani
5b1657278c
HADOOP-18377. hadoop-aws build to add a -prefetch profile to run all tests with prefetching (#4914)
Contributed by Viraj Jasani
2022-09-20 10:26:13 +01:00
Mukund Thakur
8732625f50
HADOOP-18439. Fix VectoredIO for LocalFileSystem when checksum is enabled. (#4862)
part of HADOOP-18103.

While merging the ranges in CheckSumFs, they are rounded up based on the
value of the checksum bytes size, which leads to some ranges crossing the EOF;
these need to be fixed, else they will cause an EOFException during actual reads.

Contributed By: Mukund Thakur
2022-09-09 21:46:08 +05:30
Viraj Jasani
56387cce57
HADOOP-18186. s3a prefetching to use SemaphoredDelegatingExecutor for submitting work (#4796)
Contributed by Viraj Jasani
2022-09-09 11:32:20 +01:00
Mehakmeet Singh
03961b10c2
HADOOP-18416. fix ITestS3AIOStatisticsContext test failure (#4806)
Follow on to HADOOP-17461.

Contributed by: Mehakmeet Singh
2022-09-08 21:03:18 +05:30
monthonk
20560401ec
HADOOP-18339. S3A storage class option only picked up when buffering writes to disk. (#4669)
Follow-up to HADOOP-12020 Support configuration of different S3 storage classes; 
S3 storage class is now set when buffering to heap/bytebuffers, and when
creating directory markers

Contributed by Monthon Klongklaew
2022-09-01 18:14:32 +01:00
Mukund Thakur
19830c98bc
HADOOP-18391. Improvements in VectoredReadUtils#readVectored() for direct buffers (#4787)
part of HADOOP-18103.

Contributed By: Mukund Thakur
2022-08-31 21:41:41 +05:30
Steve Loughran
c69e16b297
HADOOP-18410. S3AInputStream.unbuffer() does not release http connections (#4766)
HADOOP-16202 "Enhance openFile()" added asynchronous draining of the 
remaining bytes of an S3 HTTP input stream for those operations
(unbuffer, seek) where it could avoid blocking the active
thread.

This patch fixes the asynchronous stream draining to work and so
return the stream back to the http pool. Without this, whenever
unbuffer() or seek() was called on a stream and an asynchronous
drain triggered, the connection was not returned; eventually
the pool would be empty and subsequent S3 requests would
fail with the message "Timeout waiting for connection from pool"

The root cause was that even though the fields passed in to drain() were
converted to references through the methods, in the lambda expression
passed in to submit, they were direct references

operation = client.submit(
 () -> drain(uri, streamStatistics,
       false, reason, remaining,
       object, wrappedStream));  /* here */

Those fields were only read during the async execution, at which
point they would have been set to null (or even a subsequent read).

A new SDKStreamDrainer class performs the draining; this is a Callable
and can be submitted directly to the executor pool.

The class is used in both the classic and prefetching s3a input streams.

Also, calling unbuffer() switches the S3AInputStream from adaptive
to random IO mode; that is, it is considered a cue that future
IO will not be sequential, whole-file reads.

Contributed by Steve Loughran.
2022-08-31 11:16:52 +01:00
ahmarsuhail
7fb9c306e2
HADOOP-18382. AWS SDK v2 upgrade prerequisites (#4698)
This patch prepares the hadoop-aws module for a future
migration to using the v2 AWS SDK (HADOOP-18073)

That upgrade will be incompatible; this patch prepares
for it:
-marks some credential providers and other 
 classes and methods as @deprecated.
-updates site documentation
-reduces the visibility of the s3 client;
 other than for testing, it is kept private to
 the S3AFileSystem class.
-logs some warnings when deprecated APIs are used.

The warning messages are printed only once
per JVM's life. To disable them, set the
log level of org.apache.hadoop.fs.s3a.SDKV2Upgrade
to ERROR
 
Contributed by Ahmar Suhail
2022-08-25 17:36:48 +01:00
Viraj Jasani
c249db80c2
HADOOP-18380. fs.s3a.prefetch.block.size to be read through longBytesOption (#4762)
Contributed by Viraj Jasani.
2022-08-23 10:49:04 +01:00
Viraj Jasani
7f030250b4
HADOOP-18403. Fix FileSystem leak in ITestS3AAWSCredentialsProvider (#4737)
Contributed By: Viraj Jasani
2022-08-19 04:14:43 +05:30
Ashutosh Gupta
d09dd4a0b9
HADOOP-18385. ITestS3ACannedACLs failure; fixed by adding in a span (#4736)
Contributed by Ashutosh Gupta
2022-08-18 13:57:43 +01:00
Steve Loughran
682931a6ac
HADOOP-18028. High performance S3A input stream (#4752)
This is the preview release of the HADOOP-18028 S3A performance input stream.
It is still stabilizing, but ready to test.

Contains

HADOOP-18028. High performance S3A input stream (#4109)
	Contributed by Bhalchandra Pandit.

HADOOP-18180. Replace use of twitter util-core with java futures (#4115)
	Contributed by PJ Fanning.

HADOOP-18177. Document prefetching architecture. (#4205)
	Contributed by Ahmar Suhail

HADOOP-18175. fix test failures with prefetching s3a input stream (#4212)
 Contributed by Monthon Klongklaew

HADOOP-18231.  S3A prefetching: fix failing tests & drain stream async.  (#4386)

	* adds in new test for prefetching input stream
	* creates streamStats before opening stream
	* updates numBlocks calculation method
	* fixes ITestS3AOpenCost.testOpenFileLongerLength
	* drains stream async
	* fixes failing unit test

	Contributed by Ahmar Suhail

HADOOP-18254. Disable S3A prefetching by default. (#4469)
	Contributed by Ahmar Suhail

HADOOP-18190. Collect IOStatistics during S3A prefetching (#4458)

	This adds iOStatisticsConnection to the S3PrefetchingInputStream class, with
	new statistic names in StreamStatistics.

	This stream is not (yet) IOStatisticsContext aware.

	Contributed by Ahmar Suhail

HADOOP-18379 rebase feature/HADOOP-18028-s3a-prefetch to trunk
HADOOP-18187. Convert s3a prefetching to use JavaDoc for fields and enums.
HADOOP-18318. Update class names to be clear they belong to S3A prefetching
	Contributed by Steve Loughran
2022-08-18 13:53:06 +01:00
Viraj Jasani
d55d76e1e2
HADOOP-18371. S3A FS init to log at debug when fs.s3a.create.storage.class is unset (#4730)
Contributed By: Viraj Jasani
2022-08-16 03:55:58 +05:30
Steve Loughran
906ae5138e
HADOOP-18402. S3A committer NPE in spark job abort (#4735)
JobID.toString() and TaskID.toString() to only be called
when the IDs are not null.

This doesn't surface in MapReduce, but Spark SQL can trigger it
in job abort, where it may invoke abortJob() with an
incomplete TaskContext.

This patch MUST be applied to branches containing
HADOOP-17833. "Improve Magic Committer Performance."

Contributed by Steve Loughran.
2022-08-15 11:20:59 +01:00
Steve Loughran
eee59a8372
Revert "HADOOP-18402. S3A committer NPE in spark job abort (#4735)"
(managed to commit through the github ui before I'd got the message done)

This reverts commit ad83e95046.
2022-08-15 11:20:36 +01:00
Steve Loughran
ad83e95046
HADOOP-18402. S3A committer NPE in spark job abort (#4735)
jobId.toString() to only be called when the ID isn't null.

this doesn't surface in MR, but spark seems to manage it

Change-Id: I06692ef30a4af510c660d7222292932a8d4b5147
2022-08-15 11:18:47 +01:00
Viraj Jasani
8c9533a0f8
HADOOP-18397. Shutdown AWSSecurityTokenService when its resources are no longer in use (#4722)
Contributed by Viraj Jasani.
2022-08-12 11:59:15 +01:00
Mukund Thakur
b28e4c6904
HADOOP-18392. Propagate vectored s3a input stream stats to file system stats. (#4704)
part of HADOOP-18103.

Contributed By: Mukund Thakur
2022-08-12 01:42:00 +05:30
huaxiangsun
e9509ac467
HADOOP-18340. deleteOnExit does not work with S3AFileSystem (#4608)
Contributed by Huaxiang Sun
2022-08-11 20:25:13 +01:00
Viraj Jasani
06f0f7db79
HADOOP-18373. IOStatisticsContext tuning (#4705)
The name of the option to enable/disable thread level statistics is
"fs.iostatistics.thread.level.enabled";

There is also an enabled() probe in IOStatisticsContext which can
be used to see if the thread level statistics is active.

Contributed by Viraj Jasani
2022-08-08 10:42:57 +01:00
ahmarsuhail
b5642c5638
HADOOP-18366. ITestS3Select.testSelectSeekFullLandsat is timing out. (#4702)
Reduces size of data read to 1 MB

Contributed by Ahmar Suhail
2022-08-05 14:13:04 +01:00
ahmarsuhail
123d1aa884
HADOOP-18368. Fixes ITestCustomSigner for access point names with '-' (#4634)
Contributed By: Ahmar Suhail <ahmarsu@amazon.co.uk>
2022-08-02 01:49:42 +05:30
Mukund Thakur
a5b12c8010
HADOOP-18227. Add input stream IOStats for vectored IO api in S3A. (#4636)
part of HADOOP-18103.

Contributed By: Mukund Thakur
2022-07-28 21:57:37 +05:30
Steve Loughran
58ed621304
HADOOP-18344. Upgrade AWS SDK to 1.12.262 (#4637)
Fixes CVE-2018-7489 in shaded jackson.

+Add more commands in testing.md
 to the CLI tests needed when qualifying
 a release

Contributed by Steve Loughran
2022-07-28 11:29:38 +01:00
ahmarsuhail
c92ff0b4f1
HADOOP-18372. ILoadTestS3ABulkDeleteThrottling failing. (#4642)
Contributed by Ahmar Suhail
2022-07-27 17:19:57 +01:00
Mehakmeet Singh
4c8cd61961
HADOOP-17461. Collect thread-level IOStatistics. (#4352)
This adds a thread-level collector of IOStatistics, IOStatisticsContext,
which can be:
* Retrieved for a thread and cached for access from other
  threads.
* reset() to record new statistics.
* Queried for live statistics through the
  IOStatisticsSource.getIOStatistics() method.
* Queries for a statistics aggregator for use in instrumented
  classes.
* Asked to create a serializable copy in snapshot()

The goal is to make it possible for applications with multiple
threads performing different work items simultaneously
to be able to collect statistics on the individual threads,
and so generate aggregate reports on the total work performed
for a specific job, query or similar unit of work.

Some changes in IOStatistics-gathering classes are needed for 
this feature
* Caching the active context's aggregator in the object's
  constructor
* Updating it in close()

Slightly more work is needed in multithreaded code,
such as the S3A committers, which collect statistics across
all threads used in task and job commit operations.

Currently the IOStatisticsContext-aware classes are:
* The S3A input stream, output stream and list iterators.
* RawLocalFileSystem's input and output streams.
* The S3A committers.
* The TaskPool class in hadoop-common, which propagates
  the active context into scheduled worker threads.

Collection of statistics in the IOStatisticsContext
is disabled process-wide by default until the feature 
is considered stable.

To enable the collection, set the option
fs.thread.level.iostatistics.enabled
to "true" in core-site.xml;
	
Contributed by Mehakmeet Singh and Steve Loughran
2022-07-26 20:41:22 +01:00
ashutoshpant
bac2219e3c
HADOOP-18330. S3AFileSystem removes Path when calling createS3Client (#4572)
Adds a new parameter object in s3ClientCreationParameters that holds 
the full s3a path URI

Contributed by Ashutosh Pant
2022-07-21 10:16:39 +01:00
Mukund Thakur
4d1f6f9b99 HADOOP-18106: Handle memory fragmentation in S3A Vectored IO. (#4445)
part of HADOOP-18103.
Handling memory fragmentation in S3A vectored IO implementation by
allocating smaller user range requested size buffers and directly
filling them from the remote S3 stream and skipping undesired
data in between ranges.
This patch also adds aborting active vectored reads when stream is
closed or unbuffer() is called.

Contributed By: Mukund Thakur
2022-06-22 17:29:32 +01:00
Mukund Thakur
1408dd89a7 HADOOP-18107 Adding scale test for vectored reads for large file (#4273)
part of HADOOP-18103.

Contributed By: Mukund Thakur
2022-06-22 17:29:32 +01:00
Mukund Thakur
5db0f34e29 HADOOP-18104: S3A: Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads (#3964)
Part of HADOOP-18103.
Introducing fs.s3a.vectored.read.min.seek.size and fs.s3a.vectored.read.max.merged.size
to configure min seek and max read during a vectored IO operation in S3A connector.
These properties actually define how the ranges will be merged. To completely
disable merging, set fs.s3a.vectored.read.max.merged.size to 0.
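
An illustrative core-site.xml sketch; the byte values shown are placeholders,
not the shipped defaults:

<property>
  <name>fs.s3a.vectored.read.min.seek.size</name>
  <value>4096</value>
</property>
<property>
  <name>fs.s3a.vectored.read.max.merged.size</name>
  <value>1048576</value>
</property>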

Contributed By: Mukund Thakur
2022-06-22 17:29:32 +01:00
Mukund Thakur
2daf0a814f HADOOP-11867. Add a high-performance vectored read API. (#3904)
part of HADOOP-18103.
Add support for multiple ranged vectored read api in PositionedReadable.
The default iterates through the ranges to read each synchronously,
but the intent is that FSDataInputStream subclasses can make more
efficient readers especially in object stores implementation.

Also added implementation in S3A where smaller ranges are merged and
sliced byte buffers are returned to the readers. All the merged ranges are
fetched from S3 asynchronously.

Contributed By: Owen O'Malley and Mukund Thakur
2022-06-22 17:29:32 +01:00