Commit Graph

5902 Commits

Author SHA1 Message Date
Viraj Jasani
321a6cc55e
HADOOP-19072. S3A: expand optimisations on stores with "fs.s3a.performance.flags" for mkdir (#6543)
If the flag list in fs.s3a.performance.flags includes "mkdir",
the safety check of walking up the tree to look for a parent directory
(done to verify a directory isn't being created under a file) is skipped.

This saves the cost of multiple list operations.

Contributed by Viraj Jasani
2024-08-08 17:48:51 +01:00
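
A minimal sketch of enabling the flag from the commit above through the
Java Configuration API; the property name and flag value come from the
commit, while the bucket URI and directory path are illustrative assumptions.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MkdirPerformanceFlag {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Skip the parent-directory safety walk on mkdirs().
        conf.set("fs.s3a.performance.flags", "mkdir");
        // Bucket name and path are placeholders, not part of the commit.
        try (FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf)) {
          fs.mkdirs(new Path("/tables/year=2024/month=08"));
        }
      }
    }
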
Masatake Iwasaki
2a50911734
HADOOP-17609. Make SM4 support optional for OpenSSL native code. (#3019)
Reviewed-by: Steve Loughran <stevel@apache.org>
Reviewed-by: Wei-Chiu Chuang <weichiu@apache.org>
2024-08-08 21:03:05 +09:00
Cheng Pan
59d5e0bb2e
HADOOP-19244. Pullout arch-agnostic maven javadoc plugin configurations in hadoop-common (#6970) Contributed by Cheng Pan.
Reviewed-by: Steve Loughran <stevel@apache.org>
Signed-off-by: Shilun Fan <slfan1989@apache.org>
2024-08-05 15:30:36 +08:00
Steve Loughran
a5806a9e7b
HADOOP-19161. S3A: option "fs.s3a.performance.flags" to take list of performance flags (#6789)
1. Configuration adds new method `getEnumSet()` to get a set of enum values from
   a configuration string.
   <E extends Enum<E>> EnumSet<E> getEnumSet(String key, Class<E> enumClass, boolean ignoreUnknown)

   Whitespace is ignored, case is ignored, and the value "*" is mapped to "all values of the enum".
   If "ignoreUnknown" is true, unknown values are ignored when parsing.
   This is recommended for forward compatibility with later versions.

2. This support is implemented in org.apache.hadoop.fs.s3a.impl.ConfigurationHelper;
    it can be used elsewhere in the hadoop codebase.

3. A new private FlagSet class in hadoop common manages a set of enum flags.

   It implements StreamCapabilities and can be probed for a specific option
   being set (with a prefix).

S3A adds an option fs.s3a.performance.flags which builds a FlagSet with enum
type PerformanceFlagEnum

* which initially contains {Create, Delete, Mkdir, Open}
* the existing fs.s3a.create.performance option sets the flag "Create".
* tests which configure fs.s3a.create.performance MUST clear
  fs.s3a.performance.flags in test setup.

Future performance flags are planned, with different levels of safety
and/or backwards compatibility.

Contributed by Steve Loughran
2024-07-29 11:33:51 +01:00
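
A sketch of the new getEnumSet() call, using the signature quoted in the
commit above; the enum and configuration key below are hypothetical
stand-ins, since the real PerformanceFlagEnum sits behind the private
FlagSet class.

    import java.util.EnumSet;
    import org.apache.hadoop.conf.Configuration;

    public class EnumSetDemo {
      // Hypothetical enum for illustration only.
      enum Flag { CREATE, DELETE, MKDIR, OPEN }

      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Whitespace and case are ignored; "*" would select every enum value.
        conf.set("demo.flags", " create, MKDIR , unknownFutureFlag");
        // ignoreUnknown=true drops unrecognised entries instead of failing.
        EnumSet<Flag> flags = conf.getEnumSet("demo.flags", Flag.class, true);
        System.out.println(flags); // [CREATE, MKDIR]
      }
    }
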
Raphael Azzolini
4525c7e35e
HADOOP-19197. S3A: Support AWS KMS Encryption Context (#6874)
The new property fs.s3a.encryption.context allows users to specify the AWS KMS Encryption Context to be used in S3A.

The value of the encryption context is a key/value string that will be Base64 encoded and set as the ssekmsEncryptionContext parameter on the S3 client.

Contributed by Raphael Azzolini
2024-07-23 17:09:04 +01:00
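
A sketch of wiring up the new property; fs.s3a.encryption.context is from
the commit above, while the companion encryption algorithm/key settings,
the key ARN, and the context key/value pairs are assumptions shown only to
give the option some surrounding context.

    import org.apache.hadoop.conf.Configuration;

    public class KmsEncryptionContext {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Assumed companion settings; only fs.s3a.encryption.context is new here.
        conf.set("fs.s3a.encryption.algorithm", "SSE-KMS");
        conf.set("fs.s3a.encryption.key",
            "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE");
        // Key/value string; S3A Base64-encodes it and passes it to the
        // S3 client as the ssekmsEncryptionContext parameter.
        conf.set("fs.s3a.encryption.context", "project=hadoop,team=storage");
      }
    }
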
Pranav Saxena
b60497ff41
HADOOP-19120. ApacheHttpClient adaptation in ABFS. (#6633)
Apache HttpClient 4.5.x is the new default implementation of HTTP connections;
this supports a large configurable pool of connections along with
the ability to limit their lifespan.

The networking library can be chosen using the configuration
option fs.azure.networking.library

The supported values are
- APACHE_HTTP_CLIENT : Use Apache HttpClient [Default]
- JDK_HTTP_URL_CONNECTION : Use JDK networking library

Important: unless the networking library is switched back to
the JDK, the Apache httpcore and httpclient libraries must be on the classpath.

Contributed by Pranav Saxena
2024-07-22 19:03:51 +01:00
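
The choice of networking library is a straight configuration switch; a
sketch follows, with the option name and both supported values taken from
the commit above.

    import org.apache.hadoop.conf.Configuration;

    public class AbfsNetworkingLibrary {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Default: requires httpcore and httpclient on the classpath.
        conf.set("fs.azure.networking.library", "APACHE_HTTP_CLIENT");
        // To fall back to the JDK implementation instead:
        // conf.set("fs.azure.networking.library", "JDK_HTTP_URL_CONNECTION");
      }
    }
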
fuchaohong
1577f57d4c
HADOOP-19228. ShellCommandFencer#setConfAsEnvVars should also replace '-' with '_'. (#6936). Contributed by fuchaohong.
Signed-off-by: He Xiaoqiao <hexiaoqiao@apache.org>
2024-07-20 16:13:33 +08:00
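
An illustrative sketch (not the actual ShellCommandFencer code) of the
key-to-environment-variable mapping the fix above extends: '.' was already
replaced with '_', and '-' now is too.

    public class EnvVarName {
      static String toEnvVar(String confKey) {
        // Both '.' and '-' are illegal in environment variable names.
        return confKey.replace('.', '_').replace('-', '_');
      }

      public static void main(String[] args) {
        // Prints dfs_ha_fencing_ssh_connect_timeout
        System.out.println(toEnvVar("dfs.ha.fencing.ssh.connect-timeout"));
      }
    }
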
Tsz-Wo Nicholas Sze
9dad697dbc
HADOOP-19227. ipc.Server accelerate token negotiation only for the default mechanism. (#6949) 2024-07-20 15:18:22 +08:00
Viraj Jasani
1360c7574a
HADOOP-19218 Avoid DNS lookup while creating IPC Connection object (#6916). Contributed by Viraj Jasani.
Signed-off-by: Rushabh Shah <shahrs87@apache.org>
Signed-off-by: He Xiaoqiao <hexiaoqiao@apache.org>
2024-07-16 21:08:41 +08:00
gavin.wang
783a852029
HDFS-17555. Fix NumberFormatException in NNThroughputBenchmark when dfs.blocksize is configured. (#6894). Contributed by wangzhongwei
Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2024-07-09 13:52:15 +05:30
Steve Loughran
4c55adbb6b
HADOOP-19205. S3A: initialization/close slower than with v1 SDK (#6892)
Adds a new ClientManager interface/implementation which provides on-demand
creation of the synchronous and asynchronous S3 clients and the S3 transfer
manager, and terminates these in close().

S3A FS is modified to
* Create a ClientManagerImpl instance and pass down to its S3Store.
* Use the same ClientManager interface against S3Store to demand-create
  the services.
* Only create the async client as part of the transfer manager creation,
  which will take place during the first rename() operation.
* Statistics on client creation count and duration are recorded.
+ Statistics on the time to initialize and shutdown the S3A FS are collected
  in IOStatistics for reporting.

Adds to hadoop common class
  LazyAtomicReference<T> implements CallableRaisingIOE<T>, Supplier<T>
and subclass
  LazyAutoCloseableReference<T extends AutoCloseable>
    extends LazyAtomicReference<T> implements AutoCloseable

These evaluate the Supplier<T>/CallableRaisingIOE<T> they were
constructed with on the first (successful) read of the value.
Any exception raised during this operation is rethrown, and on future
evaluations the same operation is retried.

These classes implement the Supplier and CallableRaisingIOE
interfaces, so they can be used to implement lazy function evaluation,
as Haskell and some other functional languages do.

LazyAutoCloseableReference is AutoCloseable; its close() method will
close the inner reference if it is set.

This class is used in ClientManagerImpl for the lazy S3 client creation
and closure.

Contributed by Steve Loughran.
2024-07-05 16:38:37 +01:00
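
A sketch of the lazy-reference classes described in the commit above; the
constructor shape (a CallableRaisingIOE factory) and package are assumptions
inferred from the commit text, while get() and close() follow from the
interfaces the classes are declared to implement.

    import java.io.IOException;
    import org.apache.hadoop.util.functional.LazyAutoCloseableReference;

    public class LazyReferenceDemo {
      public static void main(String[] args) throws Exception {
        // Nothing is created until the first get(); a failure in the factory
        // is rethrown and the creation retried on the next evaluation.
        LazyAutoCloseableReference<AutoCloseable> ref =
            new LazyAutoCloseableReference<>(LazyReferenceDemo::openResource);
        ref.get();   // first successful read evaluates the factory
        ref.close(); // closes the inner reference only if it was ever set
      }

      static AutoCloseable openResource() throws IOException {
        return () -> System.out.println("closed");
      }
    }
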
hfutatzhanghb
a57105462b
HADOOP-19215. Fix unit tests testSlowConnection and testBadSetup failing in TestRPC. (#6912). Contributed by farmmamba.
Reviewed-by: huhaiyang <huhaiyang926@126.com>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2024-07-05 12:11:39 +05:30
Steve Loughran
8ac9c1839a
HADOOP-19203. WrappedIO BulkDelete API to raise IOEs as UncheckedIOExceptions (#6885)
* WrappedIO methods raise UncheckedIOExceptions
* New class org.apache.hadoop.util.functional.FunctionalIO
  with wrap/unwrap and the ability to generate a
  java.util.function.Supplier around a CallableRaisingIOE.

Contributed by Steve Loughran
2024-06-19 18:47:29 +01:00
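
A sketch of the wrap/unwrap pattern the commit above describes; the method
name toUncheckedIOExceptionSupplier is an assumption based on that
description, not a confirmed signature.

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.util.function.Supplier;
    import org.apache.hadoop.util.functional.CallableRaisingIOE;
    import org.apache.hadoop.util.functional.FunctionalIO;

    public class FunctionalIODemo {
      public static void main(String[] args) {
        CallableRaisingIOE<String> call = () -> {
          throw new IOException("simulated failure");
        };
        try {
          // Wrap the IOE-raising callable as a java.util.function.Supplier.
          Supplier<String> supplier =
              FunctionalIO.toUncheckedIOExceptionSupplier(call);
          supplier.get();
        } catch (UncheckedIOException e) {
          // The original IOException is preserved as the cause.
          System.out.println("cause: " + e.getCause());
        }
      }
    }
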
Steve Loughran
56c8aa5f1c
HADOOP-19204. VectorIO regression: empty ranges are now rejected (#6887)
- restore old outcome: no-op
- test this
- update spec

This is a critical fix for vector IO and MUST be cherrypicked to all branches with
that feature

Contributed by Steve Loughran
2024-06-19 12:05:24 +01:00
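
A sketch of the restored behaviour: an empty range list is once again a
no-op. The local path is a placeholder; readVectored() and its allocator
argument are the standard vector IO API.

    import java.nio.ByteBuffer;
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class EmptyVectorRead {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.getLocal(new Configuration());
        try (FSDataInputStream in = fs.open(new Path("/tmp/example.dat"))) {
          // With this fix, an empty range list returns without doing
          // anything rather than being rejected.
          in.readVectored(Collections.emptyList(), ByteBuffer::allocate);
        }
      }
    }
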
Fateh Singh
90024d8cb1
HDFS-17439. Support -nonSuperUser for NNThroughputBenchmark: useful for testing auth frameworks such as Ranger (#6677) 2024-06-18 13:52:24 +01:00
Steve Loughran
2d5fa9e016
HADOOP-18508. S3A: Support parallel integration test runs on same bucket (#5081)
It is now possible to provide a job ID in the maven "job.id" property
for hadoop-aws test runs, to isolate paths under the test bucket
under which all tests will be executed.

This will allow independent builds *in different source trees*
to test against the same bucket in parallel, and is designed for
CI testing.

Example:

mvn verify -Dparallel-tests -Droot.tests.enabled=false -Djob.id=1
mvn verify -Droot.tests.enabled=false -Djob.id=2

- Root tests must be disabled to stop them cleaning up
  the test paths of other test runs.
- Do still regularly run the root tests, just to force cleanup
  of the output of any interrupted test suites.

Contributed by Steve Loughran
2024-06-14 19:34:52 +01:00
Viraj Jasani
240fddcf17
HADOOP-18931. FileSystem.getFileSystemClass() to log the jar the .class came from (#6197)
Set the log level of logger org.apache.hadoop.fs.FileSystem to DEBUG to see this.

Contributed by Viraj Jasani
2024-06-14 19:14:54 +01:00
Cheng Pan
2bde5ccb81
HADOOP-19192. Log level is WARN when failing to load native hadoop libs (#6863)
Updates the documentation to be consistent with the logging.

Contributed by Cheng Pan
2024-06-14 19:05:27 +01:00
Mukund Thakur
06dd3bfee8
HADOOP-19196. Allow base path to be deleted as well using Bulk Delete. (#6872)
Contributed by: Mukund Thakur
2024-06-11 14:06:53 -05:00
PJ Fanning
bb30545583
HADOOP-19163. Use hadoop-shaded-protobuf_3_25 (#6858)
Contributed by PJ Fanning
2024-06-11 17:10:00 +01:00
Yu Zhang
f1e2ceb823
HDFS-13603: Do not propagate ExecutionException while initializing EDEK queues for keys. (#6860) 2024-06-03 09:10:06 -07:00
Steve Loughran
d00b3acd5e
HADOOP-18679. Followup: change method name case (#6854)
WrappedIO.bulkDelete_PageSize() => bulkDelete_pageSize()

Makes it consistent with the HADOOP-19131 naming scheme.
The name needs to be fixed before it is invoked through reflection:
with the wrong case, the reflective binding fails at run time
even though compilation succeeds.

Contributed by Steve Loughran
2024-05-30 19:34:30 +01:00
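
A sketch of why the case matters: reflective lookup is exact-match, so the
binding fails at run time unless the name is spelled precisely. The
parameter types below are assumptions for illustration.

    import java.lang.reflect.Method;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.wrappedio.WrappedIO;

    public class ReflectiveLookup {
      public static void main(String[] args) throws Exception {
        // Succeeds only with the corrected name; "bulkDelete_PageSize"
        // would throw NoSuchMethodException.
        Method m = WrappedIO.class.getMethod(
            "bulkDelete_pageSize", FileSystem.class, Path.class);
        System.out.println(m);
      }
    }
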
Mukund Thakur
d107931fc7
HADOOP-19188. Fix TestHarFileSystem and TestFilterFileSystem failing after bulk delete API got added. (#6848)
Follow up to: HADOOP-18679 Add API for bulk/paged delete of files and objects

Contributed by Mukund Thakur
2024-05-29 17:27:09 +01:00
刘斌
6c08e8e2aa
HADOOP-19156. ZooKeeper based state stores use different ZK address configs. (#6767). Contributed by liu bin.
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
Signed-off-by: He Xiaoqiao <hexiaoqiao@apache.org>
2024-05-29 20:44:36 +08:00
Sebb
f11a8cfa6e
HADOOP-13147. Constructors must not call overrideable methods in PureJavaCrc32C (#6408). Contributed by Sebb. 2024-05-21 00:08:08 +05:30
Mukund Thakur
47be1ab3b6
HADOOP-18679. Add API for bulk/paged delete of files (#6726)
Applications can create a BulkDelete instance from a
BulkDeleteSource; the BulkDelete interface provides
pageSize(), the maximum number of entries which can be
deleted in one call, and a bulkDelete(Collection paths)
method which can take a collection up to pageSize() long.

This is optimized for object stores with bulk delete APIs;
the S3A connector will offer the page size of
fs.s3a.bulk.delete.page.size unless bulk delete has
been disabled.

Even with a page size of 1, the S3A implementation is
more efficient than delete(path)
as there are no safety checks for the path being a directory
or probes for the need to recreate directories.

The interface BulkDeleteSource is implemented by
all FileSystem implementations, with a page size
of 1 and mapped to delete(pathToDelete, false).
This means that callers do not need to have special
case handling for object stores versus classic filesystems.

To aid use through reflection APIs, the class
org.apache.hadoop.io.wrappedio.WrappedIO
has been created with "reflection friendly" methods.

Contributed by Mukund Thakur and Steve Loughran
2024-05-20 17:05:25 +01:00
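
A sketch of the API described in the commit above, run against the local
filesystem (page size 1). The factory method name createBulkDelete(), the
try-with-resources usage, and the (path, error) entries of the return value
are assumptions beyond the exact commit text; the local path is a placeholder.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BulkDelete;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BulkDeleteDemo {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.getLocal(new Configuration());
        Path base = new Path("/tmp/bulk-delete-demo");
        try (BulkDelete bulk = fs.createBulkDelete(base)) {
          int pageSize = bulk.pageSize(); // 1 for non-S3A stores
          // The collection passed in must be no longer than pageSize().
          List<Map.Entry<Path, String>> failures =
              bulk.bulkDelete(Arrays.asList(new Path(base, "file-0001")));
          System.out.println("page size " + pageSize + ", failures " + failures);
        }
      }
    }
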
skyskyhu
3c00093cb5
HADOOP-19167 Bug Fix: Change of Codec configuration does not work (#6807) 2024-05-17 10:27:39 +08:00
Vikas Kumar
f8dce6c501
HADOOP-18851. Performance improvement for DelegationTokenSecretManager (#6803) 2024-05-16 12:30:52 +08:00
Christopher Tubbs
2e77b7b02c
[HADOOP-18786] Use CDN instead of ASF archive (#5789)
* Use Yetus 0.14.1 from downloads.apache.org in yetus-wrapper
* Use Maven 3.8.8 from downloads.apache.org in Win 10 Dockerfile
* Point users to downloads.apache.org for JVSC
* Use Solr 8.11.2 from downloads.apache.org in YARN Dockerfile

Contributed by Christopher Tubbs
2024-05-14 20:09:52 +01:00
zhihui wang
39dee8ea19
HADOOP-18958. Improve UserGroupInformation debug log. (#6255)
Contributed by zhihui wang
2024-05-14 20:03:49 +01:00
Tsz-Wo Nicholas Sze
bda7045070
HADOOP-19152. Do not hard code security providers. (#6739) 2024-05-14 11:19:57 -07:00
zhengchenyu
4cb4d5dd08
HADOOP-19170. Fixes compilation issues on non-Linux systems (#6822)
Reviewed-by: Steve Loughran <stevel@apache.org>
Reviewed-by: Wei-Chiu Chuang <weichiu@apache.org>
2024-05-13 20:04:01 -07:00
Felix Nguyen
fb0519253d
HDFS-17488. DN can fail IBRs with NPE when a volume is removed (#6759) 2024-05-11 15:37:43 +08:00
Sammi Chen
43e8ca428e Revert "HADOOP-18851: Performance improvement for DelegationTokenSecretManager. (#6001). Contributed by Vikas Kumar."
This reverts commit e283375cdf.
2024-05-07 13:29:32 +08:00
Tsz-Wo Nicholas Sze
78987a71a6
HADOOP-19151. Support configurable SASL mechanism. (#6740) 2024-04-29 10:02:23 -07:00
zhtttylz
daafc8a0b8
HDFS-17367. Add PercentUsed for Different StorageTypes in JMX (#6735) Contributed by Hualong Zhang.
Signed-off-by: Shilun Fan <slfan1989@apache.org>
2024-04-27 20:36:11 +08:00
Pranav Saxena
6404692c09
HADOOP-19102. [ABFS] FooterReadBufferSize should not be greater than readBufferSize (#6617)
Contributed by  Pranav Saxena
2024-04-22 18:36:12 +01:00
zj619
922c44a339
HADOOP-19130. FTPFileSystem rename with full qualified path broken (#6678). Contributed by shawn
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2024-04-17 23:12:38 +05:30
PJ Fanning
d194ad0242
HADOOP-19079. HttpExceptionUtils to verify that loaded class is really an exception before instantiation (#6557)
Security hardening

+ Adds a new interceptAndValidateMessageContains() method in LambdaTestUtils to verify that a list of strings
  can all be found in the toString() value of a raised exception

Contributed by PJ Fanning
2024-04-11 19:38:15 +01:00
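
A sketch of the new test helper; only the method name appears in the commit
above, so the argument order and types below are assumptions modelled on
the existing LambdaTestUtils.intercept() overloads.

    import java.io.IOException;
    import java.util.Arrays;

    import static org.apache.hadoop.test.LambdaTestUtils.interceptAndValidateMessageContains;

    public class InterceptDemo {
      public static void demo() throws Exception {
        // Asserts an IOException is raised whose toString() contains
        // every one of the listed strings (argument shape assumed).
        interceptAndValidateMessageContains(
            IOException.class,
            Arrays.asList("load class", "not an exception"),
            () -> {
              throw new IOException("failed to load class: not an exception");
            });
      }
    }
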
Gautham B A
f7bb4f1595
HADOOP-18135. Produce Windows binaries of Hadoop (#6673)
This PR enables one to create the Hadoop
release tarball on Windows, complete with
the native binaries (including winutils.exe).
This PR contains the following changes:

* Prevents splitting during array element
  expansion - this is needed since we need
  to pass the arguments correctly to maven.
* Install Python 3.11.8 and pip to the
  Windows docker image for building
  Hadoop.
* pom file changes to get maven to invoke
  the releasedocmaker script through
  bash.exe on Windows.
2024-04-09 22:15:05 +05:30
Steve Loughran
87fb977777
HADOOP-19098. Vector IO: Specify and validate ranges consistently. #6604
Clarifies behaviour of VectorIO methods with contract tests as well as
specification.

* Add precondition range checks to all implementations
* Identify and fix a bug where direct buffer reads were broken
  (HADOOP-19101; this surfaced in ABFS contract tests)
* Logging in VectoredReadUtils.
* TestVectoredReadUtils verifies validation logic.
* FileRangeImpl toString() improvements
* CombinedFileRange tracks bytes in range which are wanted;
   toString() output logs this.

HDFS
* Add test TestHDFSContractVectoredRead

ABFS
* Add test ITestAbfsFileSystemContractVectoredRead

S3A
* checks for vector IO being stopped in all iterative
  vector operations, including draining
* maps read() returning -1 to failure
* passes in file length to validation
* Error reporting to only completeExceptionally() those ranges
  which had not yet read data in.
* Improved logging.

readVectored()
* made synchronized. This is only for the invocation;
  the actual async retrieves are unsynchronized.
* closes input stream on invocation
* switches to random IO, so avoids keeping any long-lived connection around.

+ AbstractSTestS3AHugeFiles enhancements.
+ ADDENDUM: test fix in ITestS3AContractVectoredRead

Contains: HADOOP-19101. Vectored Read into off-heap buffer broken in fallback
implementation

Contributed by Steve Loughran

Change-Id: Ia4ed71864c595f175c275aad83a2ff5741693432
2024-04-03 13:17:52 +01:00
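
A sketch of a vectored read subject to the validation rules this commit
specifies; the local path and range offsets are placeholders, while
FileRange.createFileRange() and the per-range futures are the standard
vector IO API.

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileRange;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class VectoredReadDemo {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.getLocal(new Configuration());
        try (FSDataInputStream in = fs.open(new Path("/tmp/example.dat"))) {
          // Ranges must pass the new precondition checks, e.g. non-negative
          // offsets and lengths within the file.
          List<FileRange> ranges = Arrays.asList(
              FileRange.createFileRange(0, 4096),
              FileRange.createFileRange(65536, 4096));
          in.readVectored(ranges, ByteBuffer::allocate);
          for (FileRange r : ranges) {
            ByteBuffer data = r.getData().get(); // each range completes a future
            System.out.println(r + " -> " + data.remaining() + " bytes");
          }
        }
      }
    }
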
Steve Loughran
b4f9d8e6fa
Revert "HADOOP-19098. Vector IO: Specify and validate ranges consistently."
This reverts commit ba7faf90c8.
2024-04-03 13:15:05 +01:00
Steve Loughran
ba7faf90c8
HADOOP-19098. Vector IO: Specify and validate ranges consistently.
Clarifies behaviour of VectorIO methods with contract tests as well as specification.

* Add precondition range checks to all implementations
* Identify and fix a bug where direct buffer reads were broken
  (HADOOP-19101; this surfaced in ABFS contract tests)
* Logging in VectoredReadUtils.
* TestVectoredReadUtils verifies validation logic.
* FileRangeImpl toString() improvements
* CombinedFileRange tracks bytes in range which are wanted;
   toString() output logs this.

HDFS
* Add test TestHDFSContractVectoredRead

ABFS
* Add test ITestAbfsFileSystemContractVectoredRead

S3A
* checks for vector IO being stopped in all iterative
  vector operations, including draining
* maps read() returning -1 to failure
* passes in file length to validation
* Error reporting to only completeExceptionally() those ranges
  which had not yet read data in.
* Improved logging.  

readVectored()
* made synchronized. This is only for the invocation;
  the actual async retrieves are unsynchronized.
* closes input stream on invocation
* switches to random IO, so avoids keeping any long-lived connection around.

+ AbstractSTestS3AHugeFiles enhancements.

Contains: HADOOP-19101. Vectored Read into off-heap buffer broken in fallback implementation

Contributed by Steve Loughran
2024-04-02 20:16:38 +01:00
PJ Fanning
f7d1ec2d9e
HADOOP-19077. Remove use of javax.ws.rs.core.HttpHeaders (#6554). Contributed by PJ Fanning
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2024-04-01 12:43:39 +05:30
PJ Fanning
06db6289cb
HADOOP-19024. Use bouncycastle jdk18 1.77 (#6410). Contributed by PJ Fanning 2024-03-30 19:58:12 +05:30
PJ Fanning
97c5a6efba
HADOOP-19041. Use StandardCharsets in more places (#6449) 2024-03-28 23:17:18 -04:00
Viraj Jasani
9fe371aa15
HADOOP-18980. Invalid inputs for getTrimmedStringCollectionSplitByEquals (ADDENDUM) (#6546)
This is a followup to #6406:
HADOOP-18980. S3A credential provider remapping: make extensible

It adds extra validation of key-value pairs in a configuration
option, with tests.

Contributed by Viraj Jasani
2024-03-26 11:18:03 +00:00
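
A sketch of the kind of key=value parsing being validated; the helper's
location and exact return type are assumptions, since the commit above
names only the method.

    import java.util.Map;
    import org.apache.hadoop.util.StringUtils;

    public class SplitByEqualsDemo {
      public static void main(String[] args) {
        // Well-formed, comma-separated key=value pairs; the ADDENDUM adds
        // validation rejecting malformed inputs such as "k1=" or "=v1".
        Map<String, String> mapping =
            StringUtils.getTrimmedStringCollectionSplitByEquals(
                "com.example.OldProvider = com.example.NewProvider, k2 = v2");
        System.out.println(mapping);
      }
    }
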
Gautham B A
44a249e32a
Exclude files from Apache RAT (#6671) 2024-03-25 09:13:57 -07:00
Alex
5c7e40f910
HADOOP-19111. Removing redundant debug message about client info (#6666). Contributed by Zhongkun Wu.
Signed-off-by: He Xiaoqiao <hexiaoqiao@apache.org>
2024-03-25 11:44:33 +08:00
yu liang
55dca911cc
HADOOP-19052. Hadoop uses a Shell command to get the hard link count, which takes a lot of time (#6587) Contributed by liangyu.
Signed-off-by: Shilun Fan <slfan1989@apache.org>
2024-03-24 10:14:27 +08:00