Commit Graph

1899 Commits

Author SHA1 Message Date
Steve Loughran
476b90f3e5
HADOOP-18965. ITestS3AHugeFilesEncryption failure (#6261)
Followup to:
HADOOP-18850 Enable dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)

Contributed by Steve Loughran
2023-11-24 17:24:12 +00:00
Viraj Jasani
f1e4376626
HADOOP-18959. Use builder for prefetch CachingBlockManager. (#6240) Contributed by Viraj Jasani 2023-11-23 11:07:44 +00:00
Steve Loughran
b108e9e2d8
HADOOP-18969. S3A: AbstractS3ACostTest to clear bucket fs.s3a.create.performance (#6264)
Add the option to the removeBaseAndBucketOverrides() list
2023-11-21 14:55:13 +00:00
PJ Fanning
f609460bda
HADOOP-18957. Use StandardCharsets.UTF_8 (#6231). Contributed by PJ Fanning.
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2023-11-20 23:44:48 +05:30
Anuj Modi
000a39ba2d
HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations (#6019)
Contributed by Anuj Modi
2023-11-13 19:36:33 +00:00
Anuj Modi
597ceaae3a
HADOOP-18874: [ABFS] Add Server request ID in Exception Messages thrown to the caller. (#6004)
Contributed by Anuj Modi
2023-11-06 20:56:55 +00:00
Steve Loughran
ef7fb64764
HADOOP-18925. S3A: option to enable/disable CopyFromLocalOperation (#6163)
Add a new option:
fs.s3a.optimized.copy.from.local.enabled

This will enable (default) or disable the
optimized CopyFromLocalOperation upload operation
when copyFromLocalFile() is invoked.

When false the superclass implementation is used; duration
statistics are still collected, though audit span entries
in logs will be for the individual fs operations, not the
overall operation.

Contributed by Steve Loughran
2023-11-06 16:00:56 +00:00
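A minimal sketch of the fs.s3a.optimized.copy.from.local.enabled option from HADOOP-18925 above; the bucket name and local path are placeholders, not taken from the commit.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import java.net.URI;

    public class CopyFromLocalToggle {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Disable the optimized CopyFromLocalOperation; the superclass copy is used instead.
        conf.setBoolean("fs.s3a.optimized.copy.from.local.enabled", false);
        try (FileSystem fs = FileSystem.newInstance(new URI("s3a://example-bucket/"), conf)) {
          fs.copyFromLocalFile(new Path("file:///tmp/sample.txt"),
              new Path("s3a://example-bucket/incoming/sample.txt"));
        }
      }
    }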
Junfan Zhang
c15fd3b2c0
YARN-11599. Incorrect log4j properties file in SLS sample conf (#6220) Contributed by Junfan Zhang.
Reviewed-by: Shilun Fan <slfan1989@apache.org>
Signed-off-by: Shilun Fan <slfan1989@apache.org>
2023-11-05 13:57:48 +08:00
Viraj Jasani
cf3a4b3bb7
HADOOP-18850. S3A: Enable dual-layer server-side encryption with AWS KMS keys (#6140)
Contributed by Viraj Jasani
2023-11-01 13:30:35 +00:00
PJ Fanning
a079f6261d
HADOOP-18917. Addendum. Fix deprecation issues after commons-io upgrade. (#6228). Contributed by PJ Fanning. 2023-10-30 09:35:02 +05:30
Junfan Zhang
e4eda40ac9
YARN-11597. Fix NPE when loading static files in SLSWebApp (#6216) Contributed by Junfan Zhang.
Reviewed-by: Shilun Fan <slfan1989@apache.org>
Signed-off-by: Shilun Fan <slfan1989@apache.org>
2023-10-27 22:11:01 +08:00
Steve Loughran
7ec636deec
HADOOP-18930. Make fs.s3a.create.performance a bucket-wide setting. (#6168)
If fs.s3a.create.performance is set on a bucket
- All file overwrite checks are skipped, even if the caller says
  otherwise.
- All directory existence checks are skipped.
- Marker deletion is *always* skipped.

This eliminates a HEAD and a LIST for every creation.

* New path capability "fs.s3a.create.performance.enabled" true
  if the option is enabled.
* Parameterize ITestS3AContractCreate to expect the different
  outcomes
* Parameterize ITestCreateFileCost similarly, with
  changed cost assertions there.
* create(/) raises an IOE. existing bug only noticed here.

Contributed by Steve Loughran
2023-10-27 12:23:55 +01:00
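A hedged sketch of the HADOOP-18930 behaviour above: setting fs.s3a.create.performance through the standard per-bucket override pattern, then probing the new path capability. The bucket name is illustrative only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import java.net.URI;

    public class CreatePerformanceProbe {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Per-bucket override: skip overwrite/parent-directory checks for this bucket only.
        conf.setBoolean("fs.s3a.bucket.example-bucket.create.performance", true);
        try (FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf)) {
          // The new path capability reports whether the option is in effect.
          boolean enabled = fs.hasPathCapability(new Path("/"),
              "fs.s3a.create.performance.enabled");
          System.out.println("create performance enabled: " + enabled);
        }
      }
    }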
Steve Loughran
8bd1f65efc
HADOOP-18948. S3A. Add option fs.s3a.directory.operations.purge.uploads to purge on rename/delete (#6218)
S3A directory delete and rename will optionally abort all pending multipart uploads
under their to-be-deleted paths when

fs.s3a.directory.operations.purge.uploads is true


It is off by default.

The filesystem's hasPathCapability("fs.s3a.directory.operations.purge.uploads")
probe will return true when this feature is enabled.

Multipart uploads may accrue from interrupted data writes, uncommitted 
staging/magic committer jobs and other operations/applications. On AWS S3
lifecycle rules are the recommended way to clean these; this change improves
support for stores which lack these rules.

Contributed by Steve Loughran
2023-10-25 17:39:16 +01:00
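A minimal sketch of enabling the HADOOP-18948 option above and probing its capability; the bucket and path are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import java.net.URI;

    public class PurgeUploadsOnDelete {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Off by default; enable so rename/delete abort pending multipart uploads underneath.
        conf.setBoolean("fs.s3a.directory.operations.purge.uploads", true);
        try (FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf)) {
          System.out.println(fs.hasPathCapability(new Path("/"),
              "fs.s3a.directory.operations.purge.uploads"));
          fs.delete(new Path("/tmp/staging-dir"), true);
        }
      }
    }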
Steve Loughran
8b974bcc1f
HADOOP-18889. Third party storage followup. (#6186)
Followup to HADOOP-18889 third party store support;

Fix some minor review comments which came in after the merge.
2023-10-24 18:17:52 +01:00
Steve Loughran
3e0fcda7a5
HADOOP-18945. S3A. IAMInstanceCredentialsProvider failing. (#6202)
This restores asynchronous retrieval/refresh of any AWS credentials provided by the
EC2 instance/container in which the process is running.

Contributed by Steve Loughran
2023-10-23 14:24:30 +01:00
Viraj Jasani
acaf8ef3ca
HADOOP-18918. ITestS3GuardTool fails if SSE/DSSE encryption is used (#6165)
HADOOP-18918. ITestS3GuardTool fails if SSE/DSSE encryption is used.

Contributed by Viraj Jasani.
2023-10-20 10:47:44 +01:00
Steve Loughran
215cb15beb
HADOOP-18946. TestErrorTranslation failure (#6205)
Fixes TestErrorTranslation.testMultiObjectExceptionFilledIn() failure
which came in with HADOOP-18939.

Contributed by Steve Loughran
2023-10-20 10:13:05 +01:00
Steve Loughran
e0563fed50
HADOOP-18908. Improve S3A region handling. (#6187)
S3A region logic improved for better inference and
to be compatible with previous releases

1. If you are using an AWS S3 AccessPoint, its region is determined
   from the ARN itself.
2. If fs.s3a.endpoint.region is set and non-empty, it is used.
3. If fs.s3a.endpoint is an s3.*.amazonaws.com url, 
   the region is determined by parsing the URL.
   Note: vpce endpoints are not handled by this.
4. If fs.s3a.endpoint.region==null, and none could be determined
   from the endpoint, use us-east-2 as default.
5. If fs.s3a.endpoint.region=="" then it is handed off to
   the default AWS SDK resolution process.

Consult the AWS SDK documentation for the details on its resolution
process, knowing that it is complicated and may use environment variables,
entries in ~/.aws/config, IAM instance information within
EC2 deployments and possibly even JSON resources on the classpath.
Put differently: it is somewhat brittle across deployments.

Contributed by Ahmar Suhail
2023-10-17 15:37:36 +01:00
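A short sketch of how the region cases listed in HADOOP-18908 above map onto configuration; the region value chosen here is illustrative, not from the commit.

    import org.apache.hadoop.conf.Configuration;

    public class S3ARegionSettings {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Case 2 above: an explicit, non-empty region is used as-is.
        conf.set("fs.s3a.endpoint.region", "eu-west-1");
        // Case 5 above: an empty string would hand resolution to the AWS SDK's own chain.
        // conf.set("fs.s3a.endpoint.region", "");
        System.out.println(conf.get("fs.s3a.endpoint.region"));
      }
    }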
Steve Loughran
e5eb404bb3
HADOOP-18939. NPE in AWS v2 SDK RetryOnErrorCodeCondition.shouldRetry() (#6193)
MultiObjectDeleteException to fill in the error details

See also: https://github.com/aws/aws-sdk-java-v2/issues/4600

Contributed by Steve Loughran
2023-10-17 15:17:16 +01:00
Steve Loughran
81edbebdd8
HADOOP-18889. S3A v2 SDK third party support (#6141)
Tune AWS v2 SDK changes based on testing with third party stores
including GCS. 

Contains HADOOP-18889. S3A v2 SDK error translations and troubleshooting docs

* Changes needed to work with multiple third party stores
* New third_party_stores document on how to bind to and test
  third party stores, including google gcs (which works!)
* Troubleshooting docs mostly updated for v2 SDK

Exception translation/resilience

* New AWSUnsupportedFeatureException for unsupported/unavailable errors
* Handle 501 method unimplemented as one of these
* Error codes > 500 mapped to the AWSStatus500Exception if no explicit
  handler.
* Precondition errors handled a bit better
* GCS throttle exception also recognized.
* GCS raises 404 on a delete of a file which doesn't exist: swallow it.
* Error translation uses reflection to create an IOE of the right type.
  All IOEs at the bottom of an AWS stack chain are regenerated,
  then a new exception of that specific type is created, with the top-level
  exception as its cause. This is done to retain the whole stack chain.
* Reduce the number of retries within the AWS SDK
* And those of s3a code.
* S3ARetryPolicy explicitly declares SocketException as a connectivity failure,
  excluding its subclass BindException
* SocketTimeoutException also considered connectivity  
* Log at debug whenever retry policies looked up
* Reorder exceptions to alphabetical order, with commentary
* Review use of the Invoke.retry() method 

 The reduction in retries is because it is clear that when you try to create a bucket
 which doesn't resolve, even an UnknownHostException takes over 90s to
 eventually fail, which then hits the s3a retry code.
 - Reducing the SDK retries means these escalate to our code better.
 - Cutting back on our own retries makes it a bit more responsive for most real
 deployments.
 - maybeTranslateNetworkException() and s3a retry policy means that
   unknown host exception is recognised and fails fast.

Contributed by Steve Loughran
2023-10-12 17:47:44 +01:00
Anuj Modi
594e9f29f5
HADOOP-18869: [ABFS] Fix behavior of a File System APIs on root path (#6003)
Contributed by Anuj Modi
2023-10-09 20:05:23 +01:00
Steve Loughran
882378c3e9
Revert "HADOOP-18869: [ABFS] Fix behavior of a File System APIs on root path (#6003)"
This reverts commit 6c6df40d35.

...so as to give the correct credit
2023-10-09 20:05:07 +01:00
Anuj Modi
6c6df40d35
HADOOP-18869: [ABFS] Fix behavior of a File System APIs on root path (#6003)
Contributed by  Anmol Asrani
2023-10-09 20:01:56 +01:00
Anmol Asrani
9c621fcea7
HADOOP-18861. ABFS: Fix failing tests for CPK (#5979)
Contributed by Anmol Asrani
2023-10-09 17:40:15 +01:00
Anmol Asrani
666af58700
HADOOP-18876. ABFS: Change default for fs.azure.data.blocks.buffer to bytebuffer (#6009)
The default value for fs.azure.data.blocks.buffer is changed from "disk" to "bytebuffer"

This will speed up writing to azure storage, at the risk of running out of memory
-especially if there are many threads writing to abfs at the same time and the
upload bandwidth is limited.

If jobs do run out of memory writing to abfs, change the option back to "disk"

Contributed by Anmol Asrani
2023-10-09 16:51:12 +01:00
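A minimal sketch of reverting the HADOOP-18876 default above for memory-constrained jobs; whether to do so is a deployment choice, not part of the commit.

    import org.apache.hadoop.conf.Configuration;

    public class AbfsBufferChoice {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // New default is "bytebuffer"; switch back to "disk" if uploads exhaust memory.
        conf.set("fs.azure.data.blocks.buffer", "disk");
        System.out.println(conf.get("fs.azure.data.blocks.buffer"));
      }
    }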
PJ Fanning
57100bba1b
HADOOP-18917. Addendum: Upgrade to commons-io 2.14.0 (#6152). Contributed by PJ Fanning
Co-authored-by: Ayush Saxena <ayushsaxena@apache.org>
2023-10-06 09:40:32 +05:30
Anmol Asrani
ababe3d9b0
HADOOP-18875. ABFS: Add sendMs and recvMs information for each AbfsHttpOperation by default. (#6008)
Contributed By: Anmol Asrani
2023-10-04 13:55:03 -05:00
Viraj Jasani
27cb551821
HADOOP-18829. S3A prefetch LRU cache eviction metrics (#5893)
Contributed by: Viraj Jasani
2023-09-21 14:31:44 +05:30
Syed Shameerur Rahman
5512c9f924
HADOOP-18797. Support Concurrent Writes With S3A Magic Committer (#6006)
Jobs which commit their work to S3 through the
magic committer now use a unique magic path
containing the job ID:
 __magic_job-${jobid}

This allows for multiple jobs to write
to the same destination simultaneously.

Contributed by Syed Shameerur Rahman
2023-09-20 11:26:42 +01:00
Pranav Saxena
f24b73e5f3
HADOOP-18873. ABFS: AbfsOutputStream doesnt close DataBlocks object. (#6010)
AbfsOutputStream to close the dataBlock object created for the upload.

Contributed By: Pranav Saxena
2023-09-20 14:24:36 +05:30
Steve Loughran
120620c1b7
HADOOP-18888. S3A. createS3AsyncClient() always enables multipart uploads (#6056)
* The multipart flag fs.s3a.multipart.uploads.enabled is passed to the async client created
* S3A connector bypasses the transfer manager entirely if disabled or for small files.

Contributed by Steve Loughran
2023-09-15 15:45:09 +01:00
PJ Fanning
56b928b86f
YARN-11498. Add exclusion for jettison everywhere jersey-json is loaded (#5786)
All uses  of jersey-json in the yarn and other hadoop modules now
exclude the obsolete org.codehaus.jettison/jettison and so avoid
all security issues which can come from the library.

Contributed by PJ Fanning
2023-09-13 18:10:24 +01:00
Steve Loughran
81d90fd65b
HADOOP-18073. S3A: Upgrade AWS SDK to V2 (#5995)
This patch migrates the S3A connector to use the V2 AWS SDK.

This is a significant change at the source code level.
Any applications using the internal extension/override points in
the filesystem connector are likely to break.

This includes but is not limited to:
- Code invoking methods on the S3AFileSystem class
  which used classes from the V1 SDK.
- The ability to define the factory for the `AmazonS3` client, and
  to retrieve it from the S3AFileSystem. There is a new factory
  API and a special interface S3AInternals to access a limited
  set of internal classes and operations.
- Delegation token and auditing extensions.
- Classes trying to integrate with the AWS SDK.

All standard V1 credential providers listed in the option 
fs.s3a.aws.credentials.provider will be automatically remapped to their
V2 equivalent.

Other V1 Credential Providers are supported, but only if the V1 SDK is
added back to the classpath.  

The SDK Signing plugin has changed; all v1 signers are incompatible.
There is no support for the S3 "v2" signing algorithm.

Finally, the aws-sdk-bundle JAR has been replaced by the shaded V2
equivalent, "bundle.jar", which is now exported by the hadoop-aws module.

Consult the document aws_sdk_upgrade for the full details.

Contributed by Ahmar Suhail + some bits by Steve Loughran
2023-09-11 14:30:25 +01:00
Anmol Asrani
01cc6d0bc8
HADOOP-18865. ABFS: Add "100-continue" in userAgent if enabled (#5987)
Contributed by Anmol Asrani
2023-08-31 15:10:04 +01:00
Mukund Thakur
28d190b904
HADOOP-18845. Add ability to configure s3 connection ttl using fs.s3a.connection.ttl (#5948)
Contributed By: Mukund Thakur
2023-08-25 12:23:17 -05:00
Yuting Chen
ce5bc4891f
HADOOP-18328. Add documentation for S3A support on S3 Outposts (#5976)
Contributed by Yuting Chen
2023-08-24 10:16:10 +01:00
suzu
70b6c155bc
HADOOP-18328. S3A to support S3 on Outposts (#4533)
Contributed by Sotetsu Suzugamine
2023-08-23 11:38:07 +01:00
Anuj Modi
ba32ea70fd
HADOOP-18826. [ABFS] Fix for GetFileStatus("/") failure. (#5909)
Contributed by Anmol Asrani
2023-08-08 19:00:02 +01:00
Sadanand Shenoy
b971222372
HDFS-17120. Support snapshot diff based copylisting for flat paths. (#5885) 2023-07-27 00:53:57 -07:00
ahmarsuhail
24f5f708df
HADOOP-18778. Fixes failing tests when CSE is enabled. (#5763)
Contributed By: Ahmar Suhail <ahmarsu@amazon.co.uk>
2023-07-26 21:56:49 +05:30
Viraj Jasani
90793e1bce
HADOOP-18805. S3A prefetch tests to work with small files (#5851)
Contributed by Viraj Jasani
2023-07-24 19:36:57 +01:00
Steve Loughran
104a323f6f
HADOOP-18795. S3A DelegationToken plugin to expand return type of binding (#5821)
Adds a class DelegationBindingInfo which contains binding info
beyond just the AWS credential list.


The binding class can be expanded when needed. Until then, all existing
implementations will work, as the new method
  DelegationBindingInfo deploy(AbstractS3ATokenIdentifier retrievedIdentifier)
falls back to the original methods.
2023-07-20 11:27:58 +01:00
Moditha Hewasinghage
b6b259066f
HADOOP-18757. S3A Committer only finalizes the commits in a single thread (#5706)
Contributed by Moditha Hewasinghage
2023-07-19 10:03:41 +01:00
Viraj Jasani
e7d74f3d59
HADOOP-18291. S3A prefetch - Implement thread-safe LRU cache for SingleFilePerBlockCache (#5754)
Contributed by Viraj Jasani
2023-07-14 10:21:01 +01:00
Mehakmeet Singh
fac7d26c5d
HADOOP-18781. ABFS backReference passed down to streams to avoid GC closing the FS. (#5780)
To avoid the ABFS instance getting closed due to GC while the streams are working, attach the ABFS instance to an opaque backReference object and pass it down to the streams, so that there is a hard reference while the streams are working.

Contributed by: Mehakmeet Singh
2023-07-11 17:57:05 +05:30
Harunobu Daikoku
e3683a954f
HADOOP-18793. S3A StagingCommitter does not clean up staging-uploads directory (#5818)
Contributed by Harunobu Daikoku
2023-07-08 12:53:54 +01:00
Dongjoon Hyun
fb16e00da0
HADOOP-18718. Fix several maven build warnings (#5592). Contributed by Dongjoon Hyun.
Reviewed-by: Gautham B A <gautham.bangalore@gmail.com>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2023-06-11 11:38:13 +05:30
Steve Loughran
7a45ef4164
MAPREDUCE-7435. Manifest Committer OOM on abfs (#5519)
This modifies the manifest committer so that the list of files
to rename is passed between stages as a file of
writeable entries on the local filesystem.

The map of directories to create is still passed in memory;
this map is built across all tasks, so even if many tasks
created files, if they all write into the same set of directories
the memory needed is O(directories) with the
task count not a factor.

The _SUCCESS file reports on heap size through gauges.
This should give a warning if there are problems.

Contributed by Steve Loughran
2023-06-09 17:00:59 +01:00
Steve Loughran
7bb09f1010
HADOOP-18752. Change fs.s3a.directory.marker.retention to "keep" (#5689)
This 
1. changes the default value of fs.s3a.directory.marker.retention
   to "keep"
2. no longer prints a message when an S3A FS instance is
   instantiated with any option other than delete.

Switching to marker retention improves performance
on any S3 bucket as there are no needless marker DELETE requests
-leading to a reduction in write IOPS and any delays waiting
for the DELETE call to finish.

There are *very* significant improvements on versioned buckets,
where tombstone markers slow down LIST operations: the more
tombstones there are, the worse query planning gets.

Having versioning enabled on production stores is the foundation
of any data protection strategy, so this has tangible benefits
in production.

It is *not* compatible with older hadoop releases; specifically
- Hadoop branch 2 < 2.10.2
- Any release of Hadoop 3.0.x and Hadoop 3.1.x
- Hadoop 3.2.0 and 3.2.1
- Hadoop 3.3.0
Incompatible releases have no problems reading data in stores
where markers are retained, but can get confused when deleting
or renaming directories.

If you are still using older versions to write data, and cannot
yet upgrade, switch the option back to "delete"

Contributed by Steve Loughran
2023-06-08 12:12:29 +01:00
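A minimal sketch of explicitly pinning the HADOOP-18752 policy above; reverting to "delete" is only for the older-client scenario the commit describes.

    import org.apache.hadoop.conf.Configuration;

    public class MarkerRetentionChoice {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // "keep" is now the default; set "delete" only if older, incompatible
        // Hadoop releases still write to the same buckets.
        conf.set("fs.s3a.directory.marker.retention", "delete");
        System.out.println(conf.get("fs.s3a.directory.marker.retention"));
      }
    }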
Ayush Saxena
1d0c9ab433
Revert "HADOOP-18207. Introduce hadoop-logging module (#5503)"
This reverts commit 03a499821c.
2023-06-05 09:34:40 +05:30
Viraj Jasani
03a499821c
HADOOP-18207. Introduce hadoop-logging module (#5503)
Reviewed-by: Duo Zhang <zhangduo@apache.org>
2023-06-02 18:07:34 -07:00
Steve Loughran
e6b54f7f68
Revert "HADOOP-18706. Improve S3ABlockOutputStream recovery (#5563)"
This reverts commit 372631c566.

Reverted due to HADOOP-18744.
2023-05-24 19:22:22 +01:00
Viraj Jasani
bef40e9427
HADOOP-18688. S3A audit header to include count of items in delete ops (#5621)
The auditor-generated http referrer URL now includes the count of keys
to delete in the "ks" query parameter

Contributed by Viraj Jasani
2023-05-16 10:40:16 +01:00
Steve Loughran
e76c09ac3b
HADOOP-18724. Open file fails with NumberFormatException for S3AFileSystem (#5611)
This:

1. Adds optLong, optDouble, mustLong and mustDouble
   methods to the FSBuilder interface to let callers explicitly
   pass in long and double arguments.
2. The opt() and must() builder calls which take float/double values
   now only set long values instead, so as to avoid problems
   related to overloaded methods resulting in a ".0" being appended
   to a long value.
3. All of the relevant opt/must calls in the hadoop codebase move to
   the new methods
4. And the s3a code is resilient to parse errors in its numeric options
   -it will downgrade to the default.

This is nominally incompatible, but the floating-point builder methods
were never used: nothing currently expects floating point numbers.

For anyone who wants to safely set numeric builder options across all compatible
releases, convert the number to a string and then use the opt(String, String)
and must(String, String) methods.

Contributed by Steve Loughran
2023-05-11 17:57:25 +01:00
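A hedged sketch of the cross-release-safe pattern the HADOOP-18724 commit above recommends: pass numeric builder options as strings through opt(String, String). The bucket, path and readahead value are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OpenFileNumericOption {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("s3a://example-bucket/data/file.bin");
        try (FileSystem fs = FileSystem.get(path.toUri(), conf);
             FSDataInputStream in = fs.openFile(path)
                 // String form is safe across all releases with openFile() support.
                 .opt("fs.s3a.readahead.range", "1048576")
                 .build()
                 .get()) {
          System.out.println(in.read());
        }
      }
    }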
slfan1989
a2dda0ce03
HADOOP-18359. Update commons-cli from 1.2 to 1.5. (#5095). Contributed by Shilun Fan.
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2023-05-10 01:42:12 +05:30
Chris
372631c566
HADOOP-18706. Improve S3ABlockOutputStream recovery (#5563)
Contributed by Chris Bevard
2023-05-05 11:57:42 +01:00
Dongjoon Hyun
27776ac45e
HADOOP-18727. Fix WriteOperations.listMultipartUploads function description (#5613)
Contributed by Dongjoon Hyun
2023-05-04 13:03:48 +01:00
Viraj Jasani
bfcf5dd03b
HADOOP-18697. S3A prefetch: failure of ITestS3APrefetchingInputStream#testRandomReadLargeFile (#5580)
Contributed by Viraj Jasani
2023-05-02 15:21:46 +01:00
Steve Loughran
eb749ddd4d
HADOOP-18695. S3A: reject multipart copy requests when disabled (#5548)
Contributed by Steve Loughran.
2023-04-27 10:59:46 +01:00
Sebastian Baunsgaard
6aac6cb212
HADOOP-18660. Filesystem Spelling Mistake (#5475). Contributed by Sebastian Baunsgaard.
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2023-04-25 21:44:04 +05:30
Tamas Domok
05e6dc19ea
HADOOP-18705. ABFS should exclude incompatible credential providers. (#5560)
Contributed by Tamas Domok.
2023-04-24 15:46:40 +01:00
Viraj Jasani
0e3aafe6c0
HADOOP-18399. S3A Prefetch - SingleFilePerBlockCache to use LocalDirAllocator (#5054)
Contributed by Viraj Jasani
2023-04-18 16:37:48 +01:00
Steve Loughran
6ea10cf41b
HADOOP-18696. ITestS3ABucketExistence arn test failures. (#5557)
Explicitly sets the fs.s3a.endpoint.region to eu-west-1 so
the ARN-referenced fs creation fails with unknown store
rather than IllegalArgumentException.

Steve Loughran
2023-04-17 10:18:33 +01:00
Steve Loughran
7c3d94a032
HADOOP-18637. S3A to support upload of files greater than 2 GB using DiskBlocks (#5543)
Contributed By: HarshitGupta and Steve Loughran
2023-04-12 05:17:45 +05:30
Sadanand Shenoy
74ddf69f80
HDFS-16911. Distcp with snapshot diff to support Ozone filesystem. (#5364) 2023-04-10 14:03:16 -07:00
HarshitGupta11
dfb2ca0a64
HADOOP-18684. S3A filesystem to support binding to other URI schemes (#5521)
Contributed by Harshit Gupta
2023-04-05 12:42:11 +01:00
sreeb-msft
389b3ea6e3
HADOOP-18012. ABFS: Enable config controlled ETag check for Rename idempotency (#5488)
To support recovery of network failures during rename, the abfs client
fetches the etag of the source file, and when recovering from a
failure, uses this tag to determine whether the rename succeeded
before the failure happened.

* This works for files, but not directories
* It adds the overhead of a HEAD request before each rename.
* The option can be disabled by setting "fs.azure.enable.rename.resilience"
  to false

Contributed by Sree Bhattacharyya
2023-03-31 19:15:15 +01:00
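A minimal sketch of turning off the HADOOP-18012 rename recovery above when the extra HEAD per rename is unwanted; the default shown is as described in the commit.

    import org.apache.hadoop.conf.Configuration;

    public class AbfsRenameResilience {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Disable the etag fetch and rename recovery, avoiding the extra HEAD request.
        conf.setBoolean("fs.azure.enable.rename.resilience", false);
        System.out.println(conf.getBoolean("fs.azure.enable.rename.resilience", true));
      }
    }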
Galsza
016362a28b
HADOOP-18548. Hadoop Archive tool (HAR) should acquire delegation tokens from source and destination file systems (#5355)
Signed-off-by: Chris Nauroth <cnauroth@apache.org>
2023-03-30 07:12:02 +08:00
Jinhu Wu
b5e8269d9b
HADOOP-18458: AliyunOSSBlockOutputStream to support heap/off-heap buffer before uploading data to OSS (#4912) 2023-03-28 14:27:01 +08:00
Anmol Asrani
762d3ddb43
HADOOP-18146: ABFS: Added changes for expect hundred continue header (#4039)
This change lets the client react pre-emptively to server load without getting to 503 and the exponential backoff
which follows. This stops performance suffering so much as capacity limits are approached for an account.

Contributed by Anmol Asrani
2023-03-27 12:43:34 +01:00
Pranav Saxena
759ddebb13
HADOOP-18647. x-ms-client-request-id to identify the retry of an API. (#5437)
The x-ms-client-request-id now includes a field to indicate a call is a retry of a previous
operation

Contributed by Pranav Saxena
2023-03-15 20:03:22 +00:00
Masatake Iwasaki
7c42d0f7da
HADOOP-17746. Compatibility table in directory_markers.md doesn't render right. (#3116)
Contributed by Masatake Iwasaki
2023-03-15 17:10:42 +00:00
Pranav Saxena
358bf80c94
HADOOP-18606. ABFS: Add reason in x-ms-client-request-id on a retried API call. (#5299)
Contributed by Pranav Saxena
2023-03-07 17:02:13 +00:00
Steve Loughran
dcd9dc6983
HADOOP-18641. Cloud connector dependency and LICENSE fixup. (#5429)
POM and LICENSE fixup of transient dependencies
* Exclude hadoop-cloud-storage imports which come in with hadoop-common
* Add explicit import of hadoop's org.codehaus.jettison declaration
  to hadoop-aliyun
* Tune aliyun jars imports
* Update LICENSE-binary for the current set of libraries.

Contributed by Steve Loughran
2023-02-28 10:48:54 +00:00
Ayush Saxena
e8a6b2c2c4
HADOOP-18582. Addendum: Skip unnecessary cleanup logic in DistCp. (#5409)
Followup to the original HADOOP-18582.

Temporary path cleanup is re-enabled for -append jobs
as these will create temporary files when creating or overwriting files.

Contributed by Ayush Saxena
2023-02-22 19:29:41 +00:00
Mehakmeet Singh
7a0903b743
HADOOP-18633. fix test AbstractContractDistCpTest#testDistCpUpdateCheckFileSkip (#5401)
Contributed by: Mehakmeet Singh
2023-02-16 10:09:06 +05:30
Viraj Jasani
90de1ff151
HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future (#5315) 2023-02-14 03:24:06 +08:00
Mehakmeet Singh
9e4f50d8a0
HADOOP-18596. Distcp -update to use modification time while checking for file skip. (#5308)
Adding toggleable support for modification time during distcp -update between two stores with incompatible checksum comparison.

Contributed by: Mehakmeet Singh <mehakmeet.singh.behl@gmail.com>
2023-02-09 21:31:09 +05:30
Ankit Saurabh
22f6d55b71
HADOOP-18246. Reduce lower limit on fs.s3a.prefetch.block.size to 1 byte. (#5120)
The minimum value of fs.s3a.prefetch.block.size is now 1

Contributed by Ankit Saurabh
2023-02-02 18:45:21 +00:00
kevin wan
3b7b79b37a
HADOOP-18582. skip unnecessary cleanup logic in distcp (#5251)
Co-authored-by: 万康 <mingge@xiaohongshu.com>
Reviewed-by: Steve Loughran <stevel@apache.org>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
Signed-off-by: Chris Nauroth <cnauroth@apache.org>
2023-01-24 15:49:32 -08:00
Nikita Eshkeev
4de31123ce
Fix "the the" and friends typos (#5267)
Signed-off-by: Nikita Eshkeev <neshkeev@yandex.ru>
2023-01-17 03:33:59 +08:00
ahmarsuhail
9c6eeb699e
HADOOP-18320. Fixes typos in Delegation Tokens documentation. (#4499)
Contributed By: Ahmar Suhail
2023-01-09 22:18:41 +05:30
陈爽-Jack Chen
f6605f1b3a
HADOOP-18438: AliyunOSSFileSystemStore deleteObjects interface should return the objects that failed to delete (#4857)
Merged to trunk, thanks @chenshuang778 for your contribution
2022-12-20 13:57:49 +08:00
Steve Loughran
33785fc5ad
HADOOP-18577. Followup: javadoc fix (#5232)
Fixes a javadoc error which came with
HADOOP-18577. ABFS: Add probes of readahead fix (#5205)

Part of the HADOOP-18521 ABFS readahead fix; MUST be included.

Contributed by Steve Loughran
2022-12-18 12:19:33 +00:00
Steve Loughran
cf1244492d
HADOOP-18577. ABFS: Add probes of readahead fix (#5205)
Followup patch to  HADOOP-18456 as part of HADOOP-18521,
ABFS ReadBufferManager buffer sharing across concurrent HTTP requests

Add probes of the readahead fix to aid in checking the safety of the
hadoop ABFS client across different releases.

* ReadBufferManager constructor logs the fact it is safe at TRACE
* AbfsInputStream declares it is fixed in toString()
  by including fs.azure.capability.readahead.safe" in the
  result.

The ABFS FileSystem hasPathCapability("fs.azure.capability.readahead.safe")
probe returns true to indicate the client's readahead manager has been fixed
to be safe when prefetching.

All Hadoop releases for which this probe returns false,
and for which the probe "fs.capability.etags.available"
returns true, are at risk of returning invalid data when reading
ADLS Gen2/Azure storage data.

Contributed by Steve Loughran.
2022-12-15 17:08:25 +00:00
Steve Loughran
aaf92fe183
HADOOP-18526. Leak of S3AInstrumentation instances via hadoop Metrics references (#5144)
This has triggered an OOM in a process which was churning through s3a fs
instances; the increased memory footprint of IOStatistics amplified what
must have been a long-standing issue with FS instances being created
and not closed()

*  Makes sure instrumentation is closed when the FS is closed.
*  Uses a weak reference from metrics to instrumentation, so even
   if the FS wasn't closed (see HADOOP-18478), this back reference
   would not cause the S3AInstrumentation reference to be retained.
*  If S3AFileSystem is configured to log at TRACE it will log the
   calling stack of initialize(), so help identify where the
   instance is being created. This should help track down
   the cause of instance leakage.

Contributed by Steve Loughran.
2022-12-14 18:21:03 +00:00
Steve Loughran
1cecf8ab70
HADOOP-18183. s3a audit logs to publish range start/end of GET requests. (#5110)
The start and end of the range is set in a new audit param "rg",
e.g "?rg=100-200"

Contributed by Ankit Saurabh
2022-12-14 14:01:28 +00:00
Steve Loughran
0a7dfcc332
HADOOP-18546. Followup: ITestReadBufferManager fix (#5198)
This is a followup to the original HADOOP-18546
patch; cherry-picks of that should include this
or follow up with it.

Removes risk of race conditions in assertions
of ITestReadBufferManager on the state of the in-progress
and completed queues by removing assertions brittle
to race conditions in scheduling/network IO

* Waits for the executor pool shutdown to complete before
  making any assertions
* Assertions that there are no in progress reads MUST be
  cut as there may be some and they won't be cancelled.
* Assertions that the completed list is without buffers
  of a closed stream are brittle because if there was
  an in progress stream which completed after stream.close()
  then it will end up in the list.

Contributed by Steve Loughran
2022-12-09 13:47:11 +00:00
Oleksandr Shevchenko
0a4528cd7f
HADOOP-18563. Misleading AWS SDK S3 timeout configuration comment (#5197)
Contributed by Oleksandr Shevchenko
2022-12-08 15:07:59 +00:00
Pranav Saxena
c67c2b7569
HADOOP-18546. ABFS. disable purging list of in progress reads in abfs stream close() (#5176)
This addresses HADOOP-18521, "ABFS ReadBufferManager buffer sharing
across concurrent HTTP requests" by not trying to cancel
in progress reads.

It supercedes HADOOP-18528, which disables the prefetching.
If that patch is applied *after* this one, prefetching
will be disabled.

As well as changing the default value in the code,
core-default.xml is updated to set
fs.azure.enable.readahead = true

As a result, if Configuration.get("fs.azure.enable.readahead")
returns a non-null value, then it can be inferred that
it was set either in core-default.xml (the fix is present)
or in core-site.xml (someone asked for it).

Contributed by Pranav Saxena.
2022-12-07 20:15:45 +00:00
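A small sketch of the inference described in HADOOP-18546 above: a non-null Configuration.get() means the key came from core-default.xml (patched release) or core-site.xml.

    import org.apache.hadoop.conf.Configuration;

    public class ReadaheadFixProbe {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        String value = conf.get("fs.azure.enable.readahead");
        if (value != null) {
          // Defined in core-default.xml (fix shipped) or explicitly in core-site.xml.
          System.out.println("fs.azure.enable.readahead = " + value);
        } else {
          System.out.println("option undefined: release predates the readahead fix");
        }
      }
    }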
Anmol Asrani
7786600744
HADOOP-18457. ABFS: Support account level throttling (#5034)
This allows  abfs request throttling to be shared across all 
abfs connections talking to containers belonging to the same abfs storage
account -as that is the level at which IO throttling is applied.

The option is enabled/disabled in the configuration option 
"fs.azure.account.throttling.enabled";
The default is "true"

Contributed by Anmol Asrani
2022-11-30 13:05:31 +00:00
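A minimal sketch of toggling the HADOOP-18457 option above; disabling it is shown only for illustration, the commit's default is "true".

    import org.apache.hadoop.conf.Configuration;

    public class AccountThrottlingToggle {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Default is true: throttling state is shared across all connections to the account.
        conf.setBoolean("fs.azure.account.throttling.enabled", false);
        System.out.println(conf.getBoolean("fs.azure.account.throttling.enabled", true));
      }
    }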
sreeb-msft
1a7acc403b
HADOOP-18498. ABFS: Remove unwanted ? prefix from SAS Tokens (#5136)
This commit parses SAS Tokens and removes the unwanted prefix of '?' from them, if present.

At present, SAS Tokens are provided to the driver through customer implementations of the SASTokenProvider interface. The SAS token providers should not assume that the token will be the first query parameter in the URIs that communicate with the backend. However, it was observed that certain public interfaces provided by Storage to generate SAS can include the '?' as the first character of the SAS Token, which would ideally be the case when it is the first query parameter. Thus, tokens that contain this prefix will lead to an error in the driver due to a clash of query parameters.

To avoid failures for use of such SAS tokens, after receiving the SAS Token from the provider, the code checks for whether any ? prefix is present or not. If yes, it is removed before further usage of the token. This way, users would not have to manually remove the prefix before passing it on as a configuration.

Contributed by Sree Bhattacharya
2022-11-28 11:38:13 +00:00
Ashutosh Gupta
2c1158e858
HADOOP-18531. Fix assertion failure in ITestS3APrefetchingInputStream (#5149)
This patch MUST be applied to all branches containing HADOOP-18378
so as to ensure reliable test runs.

Contributed by Ashutosh Gupta
2022-11-23 17:47:39 +00:00
Mehakmeet Singh
69e50c7b44
HADOOP-18528. Disable abfs prefetching by default (#5134)
Disables block prefetching on ABFS InputStreams, by setting
fs.azure.enable.readahead to false in core-default.xml and
the matching java constant.

This prevents
HADOOP-18521. ABFS ReadBufferManager buffer sharing across concurrent HTTP requests.

Once a fix for that is committed, this change can be reverted.

Contributed by Mehakmeet Singh.
2022-11-15 14:28:41 +00:00
Steve Loughran
7f9ca101e2
HADOOP-18517. ABFS: Add fs.azure.enable.readahead option to disable readahead (#5103)
* HADOOP-18517. ABFS: Add fs.azure.enable.readahead option to disable readahead

Adds new config option to turn off readahead
* also allows it to be passed in through openFile(),
* extends ITestAbfsReadWriteAndSeek to use the option, including one
  replicated test...that shows that turning it off is slower.

Important: this does not address the critical data corruption issue
HADOOP-18521. ABFS ReadBufferManager buffer sharing across concurrent HTTP requests

What it does do is provide a way to completely bypass the ReadBufferManager.
To mitigate the problem, either fs.azure.enable.readahead needs to be set to false,
or "fs.azure.readaheadqueue.depth" set to 0 -this still goes near the (broken)
ReadBufferManager code, but doesn't trigger the bug.

For safe reading of files through the ABFS connector, readahead MUST be disabled
or the followup fix to HADOOP-18521 applied

Contributed by Steve Loughran
2022-11-08 11:43:04 +00:00
Daniel Carl Jones
0b577992ef
HADOOP-18482. ITestS3APrefetchingInputStream to skip if CSV test file unavailable (#4983)
Contributed by Danny Jones
2022-10-31 21:19:34 +00:00
Steve Loughran
3b10cb5a3b
HADOOP-18507. VectorIO FileRange type to support a "reference" field (#5076)
Contributed by Steve Loughran
2022-10-31 21:12:13 +00:00
sabertiger
af7dd660e0
HADOOP-18233. Possible race condition with TemporaryAWSCredentialsProvider (#5024)
This fixes a race condition with the TemporaryAWSCredentialsProvider,
one which has existed for a long time but which only surfaced
(usually in Spark) when the bucket existence probe was disabled
by setting fs.s3a.bucket.probe to 0 -a performance speedup
which was made the default in HADOOP-17454.

Contributed by Jimmy Wong.
2022-10-31 12:43:30 +00:00
Mehakmeet Singh
fba46aa5bb
HADOOP-18499. S3A to support HTTPS web proxies (#5051)
The option "fs.s3a.proxy.ssl.enabled" controls
whether the s3a connects to a proxy over HTTP (default) or HTTPS.
Set to "true" to use HTTPS.

Contributed by Mehakmeet Singh
2022-10-26 11:45:20 +01:00
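A minimal sketch of the HADOOP-18499 option above, combined with the existing S3A proxy host/port settings; the host and port values are placeholders.

    import org.apache.hadoop.conf.Configuration;

    public class S3AProxyOverHttps {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("fs.s3a.proxy.host", "proxy.example.com");   // placeholder host
        conf.set("fs.s3a.proxy.port", "8443");                 // placeholder port
        // Talk to the proxy over HTTPS rather than the default HTTP.
        conf.setBoolean("fs.s3a.proxy.ssl.enabled", true);
        System.out.println(conf.getBoolean("fs.s3a.proxy.ssl.enabled", false));
      }
    }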
Sneha Vijayarajan
a996d889ec
HADOOP-17767. ABFS: Update test scripts (#3124)
Contributed by Sneha Vijayarajan
2022-10-20 18:07:04 +01:00
Viraj Jasani
8aa04b0b24
HADOOP-18189 S3APrefetchingInputStream to support status probes when closed (#5036)
Contributed by Viraj Jasani
2022-10-19 14:38:11 +01:00
Daniel Carl Jones
6207ac47e0
HADOOP-18304. Improve user-facing S3A committers documentation (#4478)
Contributed by: Daniel Carl Jones
2022-10-19 12:56:47 +05:30
Steve Loughran
d80db6c9e5
HADOOP-18476. Abfs and S3A FileContext bindings to close wrapped filesystems in finalizer (#4966)
This is to try and close the underlying filesystems when the FileContext APIs are used.
Without this, threads may be leaked
2022-10-18 14:53:02 +01:00
Ankit Saurabh
2d91daab5e
HADOOP-18156. Address JavaDoc warnings in classes like MarkerTool, S3ObjectAttributes, etc (#4965)
Contributed by Ankit Saurabh
2022-10-17 18:10:47 +01:00
Szilard Nemeth
b0d5182c31 YARN-10680. Revisit try blocks without catch blocks but having finally blocks. Contributed by Susheel Gupta 2022-10-15 21:51:08 +02:00
ahmarsuhail
77e551a478
HADOOP-18481. AWS v2 SDK upgrade log to not warn about standard AWS Credential Providers. (#4973)
The AWS SDKV2 upgrade log no longer warns about instantiation
of the v1 SDK credential providers which are commonly used in
s3a configurations:

* com.amazonaws.auth.EnvironmentVariableCredentialsProvider
* com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper
* com.amazonaws.auth.InstanceProfileCredentialsProvider

When the hadoop-aws module moves to the v2 SDK, references to these
credential providers will be rewritten to their v2 equivalents.

Follow-on to HADOOP-18382. "Upgrade AWS SDK to V2 - Prerequisites"

Contributed by Ahmar Suhail
2022-10-14 10:48:09 +01:00
monthonk
9439d8e4e4
HADOOP-18292. Fix s3 select tests when running against unsupported storage class (#4489)
Follow-on from HADOOP-12020.

Contributed by Monthon Klongklaew
2022-10-13 13:13:36 +01:00
Mukund Thakur
be70bbb4be
HADOOP-18460. checkIfVectoredIOStopped before populating the buffers (#4986)
Contributed by Mukund Thakur
2022-10-10 11:17:45 +01:00
PJ Fanning
8336b91329
HADOOP-18469. Add secure XML parser factories to XMLUtils (#4940)
Add to XMLUtils a set of methods to create secure XML Parsers/transformers, locking down DTD, schema, XXE exposure.

Use these wherever XML parsers are created.

Contributed by PJ Fanning
2022-10-06 19:30:51 +01:00
Daniel Carl Jones
7ec762a5fd
HADOOP-18465. Fix S3A SSE test skip when encryption is disabled (#4925)
Contributed by Daniel Carl Jones
2022-10-06 12:42:01 +01:00
Alessandro Passaro
1675a28e5a
HADOOP-18378. Implement lazy seek in S3A prefetching. (#4955)
Make S3APrefetchingInputStream.seek() completely lazy. Calls to seek() will not affect the current buffer nor interfere with prefetching, until read() is called.

This change allows various usage patterns to benefit from prefetching, e.g. when calling readFully(position, buffer) in a loop for contiguous positions the intermediate internal calls to seek() will be noops and prefetching will have the same performance as in a sequential read.

Contributed by Alessandro Passaro.
2022-10-06 12:00:41 +01:00
Steve Loughran
38b2ed2151
HADOOP-18442. Remove openstack support (#4855)
Contributed by Steve Loughran
2022-10-06 11:49:38 +01:00
Mukund Thakur
735e35d648
HADOOP-18347. S3A Vectored IO to use bounded thread pool. (#4918)
part of HADOOP-18103.

Also introducing a config fs.s3a.vectored.active.ranged.reads
to configure the maximum number of range reads a
single input stream can have active (downloading, or queued)
to the central FileSystem instance's pool of queued operations.
This stops a single stream overloading the shared thread pool.

Contributed by: Mukund Thakur
2022-09-27 21:13:07 +05:30
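A minimal sketch of adjusting the HADOOP-18347 cap described above; the value of 4 is illustrative, not the commit's default.

    import org.apache.hadoop.conf.Configuration;

    public class VectoredActiveReadsCap {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Cap how many ranges a single stream may have queued/downloading in the shared pool.
        conf.setInt("fs.s3a.vectored.active.ranged.reads", 4);
        System.out.println(conf.getInt("fs.s3a.vectored.active.ranged.reads", 0));
      }
    }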
Viraj Jasani
648071e197
HADOOP-18466. Limit the findbugs suppression IS2_INCONSISTENT_SYNC to S3AFileSystem field (#4926)
Follow-on to HADOOP-18455.

Contributed by Viraj Jasani
2022-09-26 18:56:58 +01:00
Viraj Jasani
084b68e380
HADOOP-18455. S3A prefetching executor should be closed (#4879)
follow-on patch to HADOOP-18186. 

Contributed by: Viraj Jasani
2022-09-22 00:22:41 +05:30
Viraj Jasani
5b1657278c
HADOOP-18377. hadoop-aws build to add a -prefetch profile to run all tests with prefetching (#4914)
Contributed by Viraj Jasani
2022-09-20 10:26:13 +01:00
Mukund Thakur
8732625f50
HADOOP-18439. Fix VectoredIO for LocalFileSystem when checksum is enabled. (#4862)
part of HADOOP-18103.

While merging the ranges in ChecksumFs, they are rounded up based on the
checksum chunk size, which leads to some ranges crossing the EOF;
these need to be fixed, else they will cause an EOFException during actual reads.

Contributed By: Mukund Thakur
2022-09-09 21:46:08 +05:30
Viraj Jasani
56387cce57
HADOOP-18186. s3a prefetching to use SemaphoredDelegatingExecutor for submitting work (#4796)
Contributed by Viraj Jasani
2022-09-09 11:32:20 +01:00
Mehakmeet Singh
03961b10c2
HADOOP-18416. fix ITestS3AIOStatisticsContext test failure (#4806)
Follow on to HADOOP-17461.

Contributed by: Mehakmeet Singh
2022-09-08 21:03:18 +05:30
Sumangala Patki
7bcf853ff4
HADOOP-17873. ABFS: Fix transient failures in ITestAbfsStreamStatistics and ITestAbfsRestOperationException (#3699)
Successor for the reverted PR #3341, using the hadoop @VisibleForTesting attribute

Contributed by Sumangala Patki
2022-09-06 11:00:52 +01:00
sreeb-msft
c48ed3e96c
HADOOP-18408. ABFS: ITestAbfsManifestCommitProtocol fails on nonHNS configuration (#4758)
ITestAbfsManifestCommitProtocol  to set requireRenameResilience to false for nonHNS configuration  (#4758)

Contributed by Sree Bhattacharyya
2022-09-02 12:33:12 +01:00
monthonk
20560401ec
HADOOP-18339. S3A storage class option only picked up when buffering writes to disk. (#4669)
Follow-up to HADOOP-12020 Support configuration of different S3 storage classes; 
S3 storage class is now set when buffering to heap/bytebuffers, and when
creating directory markers

Contributed by Monthon Klongklaew
2022-09-01 18:14:32 +01:00
Mukund Thakur
19830c98bc
HADOOP-18391. Improvements in VectoredReadUtils#readVectored() for direct buffers (#4787)
part of HADOOP-18103.

Contributed By: Mukund Thakur
2022-08-31 21:41:41 +05:30
Steve Loughran
c69e16b297
HADOOP-18410. S3AInputStream.unbuffer() does not release http connections (#4766)
HADOOP-16202 "Enhance openFile()" added asynchronous draining of the 
remaining bytes of an S3 HTTP input stream for those operations
(unbuffer, seek) where it could avoid blocking the active
thread.

This patch fixes the asynchronous stream draining to work and so
return the stream back to the http pool. Without this, whenever
unbuffer() or seek() was called on a stream and an asynchronous
drain triggered, the connection was not returned; eventually
the pool would be empty and subsequent S3 requests would
fail with the message "Timeout waiting for connection from pool"

The root cause was that even though the fields passed in to drain() were
converted to references through the methods, in the lambda expression
passed in to submit, they were direct references

operation = client.submit(
 () -> drain(uri, streamStatistics,
       false, reason, remaining,
       object, wrappedStream));  /* here */

Those fields were only read during the async execution, at which
point they would have been set to null (or even a subsequent read).

A new SDKStreamDrainer class performs the draining; this is a Callable
and can be submitted directly to the executor pool.

The class is used in both the classic and prefetching s3a input streams.

Also, calling unbuffer() switches the S3AInputStream from adaptive
to random IO mode; that is, it is considered a cue that future
IO will not be sequential, whole-file reads.

Contributed by Steve Loughran.
2022-08-31 11:16:52 +01:00
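A generic sketch of the capture hazard HADOOP-18410 describes above (not the S3A code itself): snapshot mutable fields into local finals before handing a lambda to an executor, so later mutation cannot change what the queued task reads. All names here are hypothetical.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CaptureByValue {
      private String reason = "unbuffer";           // field mutated elsewhere after submit

      CompletableFuture<Void> drainAsync(ExecutorService pool) {
        final String capturedReason = this.reason;  // snapshot now, not at execution time
        return CompletableFuture.runAsync(
            () -> System.out.println("draining: " + capturedReason), pool);
      }

      public static void main(String[] args) {
        CaptureByValue c = new CaptureByValue();
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CompletableFuture<Void> f = c.drainAsync(pool);
        c.reason = null;                            // too late to affect the queued task
        f.join();
        pool.shutdown();
      }
    }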
ahmarsuhail
7fb9c306e2
HADOOP-18382. AWS SDK v2 upgrade prerequisites (#4698)
This patch prepares the hadoop-aws module for a future
migration to using the v2 AWS SDK (HADOOP-18073)

That upgrade will be incompatible; this patch prepares
for it:
-marks some credential providers and other 
 classes and methods as @deprecated.
-updates site documentation
-reduces the visibility of the s3 client;
 other than for testing, it is kept private to
 the S3AFileSystem class.
-logs some warnings when deprecated APIs are used.

The warning messages are printed only once
per JVM's life. To disable them, set the
log level of org.apache.hadoop.fs.s3a.SDKV2Upgrade
to ERROR
 
Contributed by Ahmar Suhail
2022-08-25 17:36:48 +01:00
Viraj Jasani
c249db80c2
HADOOP-18380. fs.s3a.prefetch.block.size to be read through longBytesOption (#4762)
Contributed by Viraj Jasani.
2022-08-23 10:49:04 +01:00
Viraj Jasani
7f030250b4
HADOOP-18403. Fix FileSystem leak in ITestS3AAWSCredentialsProvider (#4737)
Contributed By: Viraj Jasani
2022-08-19 04:14:43 +05:30
Ashutosh Gupta
d09dd4a0b9
HADOOP-18385. ITestS3ACannedACLs failure; fixed by adding in a span (#4736)
Contributed by Ashutosh Gupta
2022-08-18 13:57:43 +01:00
Steve Loughran
682931a6ac
HADOOP-18028. High performance S3A input stream (#4752)
This is the preview release of the HADOOP-18028 S3A performance input stream.
It is still stabilizing, but ready to test.

Contains

HADOOP-18028. High performance S3A input stream (#4109)
	Contributed by Bhalchandra Pandit.

HADOOP-18180. Replace use of twitter util-core with java futures (#4115)
	Contributed by PJ Fanning.

HADOOP-18177. Document prefetching architecture. (#4205)
	Contributed by Ahmar Suhail

HADOOP-18175. fix test failures with prefetching s3a input stream (#4212)
 Contributed by Monthon Klongklaew

HADOOP-18231.  S3A prefetching: fix failing tests & drain stream async.  (#4386)

	* adds in new test for prefetching input stream
	* creates streamStats before opening stream
	* updates numBlocks calculation method
	* fixes ITestS3AOpenCost.testOpenFileLongerLength
	* drains stream async
	* fixes failing unit test

	Contributed by Ahmar Suhail

HADOOP-18254. Disable S3A prefetching by default. (#4469)
	Contributed by Ahmar Suhail

HADOOP-18190. Collect IOStatistics during S3A prefetching (#4458)

	This adds iOStatisticsConnection to the S3PrefetchingInputStream class, with
	new statistic names in StreamStatistics.

	This stream is not (yet) IOStatisticsContext aware.

	Contributed by Ahmar Suhail

HADOOP-18379 rebase feature/HADOOP-18028-s3a-prefetch to trunk
HADOOP-18187. Convert s3a prefetching to use JavaDoc for fields and enums.
HADOOP-18318. Update class names to be clear they belong to S3A prefetching
	Contributed by Steve Loughran
2022-08-18 13:53:06 +01:00
Viraj Jasani
d55d76e1e2
HADOOP-18371. S3A FS init to log at debug when fs.s3a.create.storage.class is unset (#4730)
Contributed By: Viraj Jasani
2022-08-16 03:55:58 +05:30
Steve Loughran
906ae5138e
HADOOP-18402. S3A committer NPE in spark job abort (#4735)
JobID.toString() and TaskID.toString() to only be called
when the IDs are not null.

This doesn't surface in MapReduce, but Spark SQL can trigger it
in job abort, where it may invoke abortJob() with an
incomplete TaskContext.

This patch MUST be applied to branches containing
HADOOP-17833. "Improve Magic Committer Performance."

Contributed by Steve Loughran.
2022-08-15 11:20:59 +01:00
Steve Loughran
eee59a8372
Revert "HADOOP-18402. S3A committer NPE in spark job abort (#4735)"
(managed to commit through the github ui before I'd got the message done)

This reverts commit ad83e95046.
2022-08-15 11:20:36 +01:00
Steve Loughran
ad83e95046
HADOOP-18402. S3A committer NPE in spark job abort (#4735)
jobId.toString() to only be called when the ID isn't null.

this doesn't surface in MR, but spark seems to manage it

Change-Id: I06692ef30a4af510c660d7222292932a8d4b5147
2022-08-15 11:18:47 +01:00
Viraj Jasani
8c9533a0f8
HADOOP-18397. Shutdown AWSSecurityTokenService when its resources are no longer in use (#4722)
Contributed by Viraj Jasani.
2022-08-12 11:59:15 +01:00
Mukund Thakur
b28e4c6904
HADOOP-18392. Propagate vectored s3a input stream stats to file system stats. (#4704)
part of HADOOP-18103.

Contributed By: Mukund Thakur
2022-08-12 01:42:00 +05:30
huaxiangsun
e9509ac467
HADOOP-18340. deleteOnExit does not work with S3AFileSystem (#4608)
Contributed by Huaxiang Sun
2022-08-11 20:25:13 +01:00
Viraj Jasani
06f0f7db79
HADOOP-18373. IOStatisticsContext tuning (#4705)
The name of the option to enable/disable thread level statistics is
"fs.iostatistics.thread.level.enabled";

There is also an enabled() probe in IOStatisticsContext which can
be used to see if the thread level statistics is active.

Contributed by Viraj Jasani
2022-08-08 10:42:57 +01:00
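A minimal sketch of enabling the option named in HADOOP-18373 above; collection is off by default.

    import org.apache.hadoop.conf.Configuration;

    public class ThreadIOStatisticsToggle {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Enable thread-level IOStatistics collection (disabled by default).
        conf.setBoolean("fs.iostatistics.thread.level.enabled", true);
        System.out.println(conf.getBoolean("fs.iostatistics.thread.level.enabled", false));
      }
    }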
ahmarsuhail
b5642c5638
HADOOP-18366. ITestS3Select.testSelectSeekFullLandsat is timing out. (#4702)
Reduces size of data read to 1 MB

Contributed by Ahmar Suhail
2022-08-05 14:13:04 +01:00
ahmarsuhail
123d1aa884
HADOOP-18368. Fixes ITestCustomSigner for access point names with '-' (#4634)
Contributed By: Ahmar Suhail <ahmarsu@amazon.co.uk>
2022-08-02 01:49:42 +05:30
Mukund Thakur
a5b12c8010
HADOOP-18227. Add input stream IOStats for vectored IO api in S3A. (#4636)
part of HADOOP-18103.

Contributed By: Mukund Thakur
2022-07-28 21:57:37 +05:30
Steve Loughran
58ed621304
HADOOP-18344. Upgrade AWS SDK to 1.12.262 (#4637)
Fixes CVE-2018-7489 in shaded jackson.

+Add more commands in testing.md
 to the CLI tests needed when qualifying
 a release

Contributed by Steve Loughran
2022-07-28 11:29:38 +01:00
ahmarsuhail
c92ff0b4f1
HADOOP-18372. ILoadTestS3ABulkDeleteThrottling failing. (#4642)
Contributed by Ahmar Suhail
2022-07-27 17:19:57 +01:00
Mehakmeet Singh
4c8cd61961
HADOOP-17461. Collect thread-level IOStatistics. (#4352)
This adds a thread-level collector of IOStatistics, IOStatisticsContext,
which can be:
* Retrieved for a thread and cached for access from other
  threads.
* reset() to record new statistics.
* Queried for live statistics through the
  IOStatisticsSource.getIOStatistics() method.
* Queried for a statistics aggregator for use in instrumented
  classes.
* Asked to create a serializable copy in snapshot()

The goal is to make it possible for applications with multiple
threads performing different work items simultaneously
to be able to collect statistics on the individual threads,
and so generate aggregate reports on the total work performed
for a specific job, query or similar unit of work.

Some changes in IOStatistics-gathering classes are needed for 
this feature
* Caching the active context's aggregator in the object's
  constructor
* Updating it in close()

Slightly more work is needed in multithreaded code,
such as the S3A committers, which collect statistics across
all threads used in task and job commit operations.

Currently the IOStatisticsContext-aware classes are:
* The S3A input stream, output stream and list iterators.
* RawLocalFileSystem's input and output streams.
* The S3A committers.
* The TaskPool class in hadoop-common, which propagates
  the active context into scheduled worker threads.

Collection of statistics in the IOStatisticsContext
is disabled process-wide by default until the feature 
is considered stable.

To enable the collection, set the option
fs.thread.level.iostatistics.enabled
to "true" in core-site.xml;
	
Contributed by Mehakmeet Singh and Steve Loughran
2022-07-26 20:41:22 +01:00
ashutoshpant
bac2219e3c
HADOOP-18330. S3AFileSystem removes Path when calling createS3Client (#4572)
Adds a new parameter object in s3ClientCreationParameters that holds 
the full s3a path URI

Contributed by Ashutosh Pant
2022-07-21 10:16:39 +01:00
Ayush Saxena
96f8e5b6f4
HADOOP-15789. DistCp does not clean staging folder if class extends DistCp. Contributed by Lawrence Andrews. (#4534)
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2022-07-08 17:04:20 +05:30
Jinhu Wu
3ec4b932c1
HADOOP-18313: AliyunOSSBlockOutputStream should not mark the temporary file for deletion (#4502)
HADOOP-18313: AliyunOSSBlockOutputStream should not mark the temporary file for deletion. Contributed by wujinhu.
2022-07-06 14:23:46 +08:00
Mehakmeet Singh
823f5ee0d4
HADOOP-18242. ABFS Rename Failure when tracking metadata is in an incomplete state (#4331)
ABFS rename fails intermittently when the Storage-blob tracking
metadata is in an incomplete state. This surfaces as the error code
404 and an error message of "RenameDestinationParentPathNotFound"

To mitigate this issue, when a request fails with this response,
the ABFS client issues a HEAD call on the source file
and then retries the rename operation

ABFS filesystem statistics track when this occurs with new counters
  rename_recovery
  metadata_incomplete_rename_failures
  rename_path_attempts

This is a very rare occurrence and appears to be triggered under certain
heavy load conditions, just as with HADOOP-18163.

Contributed by Mehakmeet Singh.
2022-06-27 19:06:59 +01:00
Mukund Thakur
4d1f6f9b99 HADOOP-18106: Handle memory fragmentation in S3A Vectored IO. (#4445)
part of HADOOP-18103.
Handling memory fragmentation in S3A vectored IO implementation by
allocating smaller user range requested size buffers and directly
filling them from the remote S3 stream and skipping undesired
data in between ranges.
This patch also adds aborting active vectored reads when the stream is
closed or unbuffer() is called.

Contributed By: Mukund Thakur
2022-06-22 17:29:32 +01:00
Mukund Thakur
1408dd89a7 HADOOP-18107 Adding scale test for vectored reads for large file (#4273)
part of HADOOP-18103.

Contributed By: Mukund Thakur
2022-06-22 17:29:32 +01:00
Mukund Thakur
5db0f34e29 HADOOP-18104: S3A: Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads (#3964)
Part of HADOOP-18103.
Introducing fs.s3a.vectored.read.min.seek.size and fs.s3a.vectored.read.max.merged.size
to configure min seek and max read during a vectored IO operation in S3A connector.
These properties actually define how the ranges will be merged. To completely
disable merging, set fs.s3a.vectored.read.max.merged.size to 0.

Contributed By: Mukund Thakur
2022-06-22 17:29:32 +01:00
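A minimal sketch of tuning the two HADOOP-18104 merge properties above; the byte values are illustrative only.

    import org.apache.hadoop.conf.Configuration;

    public class VectoredReadMergeTuning {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Ranges closer together than the min seek size may be merged,
        // up to the maximum merged read size.
        conf.setLong("fs.s3a.vectored.read.min.seek.size", 4096L);
        conf.setLong("fs.s3a.vectored.read.max.merged.size", 1048576L);
        System.out.println(conf.get("fs.s3a.vectored.read.min.seek.size"));
      }
    }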