Commit Graph

4972 Commits

Author SHA1 Message Date
Sammi Chen
6fd4fea748
HADOOP-19261. Support force close a DomainSocket for server service (#7057) 2024-09-30 10:06:07 -07:00
Ayush Saxena
3fda243419
HADOOP-19290. Operating on / in ChecksumFileSystem throws NPE. (#7074). Contributed by Ayush Saxena. 2024-09-28 19:35:32 +05:30
Sarveksha Yeshavantha Raju
01401d71ef
HADOOP-19281. MetricsSystemImpl should not print INFO message in CLI (#7071)
Replaced all LOG.info with LOG.debug

Contributed by Sarveksha Yeshavantha Raju
2024-09-27 14:20:11 +01:00
Sadanand Shenoy
49a495803a
HDFS-17381. Distcp of EC files should not be limited to DFS. (#6551)
Contributed by Sadanand Shenoy
2024-09-25 17:54:09 +01:00
Nihal Jain
e602c601dd
HADOOP-15760. Upgrade commons-collections to commons-collections4 (#7006)
This moves Hadoop to Apache commons-collections4.

Apache commons-collections has been removed and is completely banned from the source code.

Contributed by Nihal Jain
2024-09-24 16:50:22 +01:00
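
For downstream code making the same migration, the change is chiefly a package rename (the new Maven artifact is org.apache.commons:commons-collections4); a minimal sketch, with a hypothetical helper class:

    // Before (the old dependency is now banned from the Hadoop source tree):
    //   import org.apache.commons.collections.CollectionUtils;
    // After:
    import org.apache.commons.collections4.CollectionUtils;

    import java.util.List;

    public class Collections4Migration {
      // CollectionUtils keeps its name and most of its API; only the package changed.
      public static boolean anyOverlap(List<String> a, List<String> b) {
        return CollectionUtils.containsAny(a, b);
      }
    }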
Ayush Saxena
28538d628e
HADOOP-19164. Hadoop CLI MiniCluster is broken (#7050). Contributed by Ayush Saxena.
Reviewed-by: Vinayakumar B <vinayakumarb@apache.org>
2024-09-21 21:26:51 +05:30
Doroszlai, Attila
182feb11a0
HADOOP-19277. Files and directories mixed up in TreeScanResults#dump (#7047) 2024-09-17 12:25:57 +02:00
Steve Loughran
ea6e0f7cd5
HADOOP-19221. S3A: Unable to recover from failure of multipart block upload attempt (#6938)
This is a major change which handles the failure mode where, when uploading
large files from the memory heap/buffer (or the staging committer), the remote S3
store returns a 500 response to the upload of a block in a multipart upload.

The SDK's own streaming code seems unable to fully replay the upload;
it attempts to, but then blocks, and the S3 store returns a 400 response:

    "Your socket connection to the server was not read from or written to
     within the timeout period. Idle connections will be closed.
     (Service: S3, Status Code: 400...)"

There is an option to control whether or not the S3A client itself
attempts to retry on a 50x error other than 503 throttling events
(which are independently processed, as before).

Option:  fs.s3a.retry.http.5xx.errors
Default: true

500 errors are very rare from standard AWS S3, which has a five nines
SLA. It may be more common against S3 Express which has lower
guarantees.

Third party stores have unknown guarantees, and the exception may
indicate a bad server configuration. Consider setting
fs.s3a.retry.http.5xx.errors to false when working with
such stores.

Significant code changes:

There is now a custom set of implementations of
software.amazon.awssdk.http.ContentStreamProvider in
the class org.apache.hadoop.fs.s3a.impl.UploadContentProviders.

These:

* Restart on failures.
* Do not copy buffers/byte buffers into new private byte arrays,
  so avoiding exacerbating memory problems.

There are new IOStatistics for specific HTTP error codes; these are collected
even when all recovery is performed within the SDK.
  
S3ABlockOutputStream has major changes, including handling of
Thread.interrupt() on the main thread, which now triggers and briefly
awaits cancellation of any ongoing uploads.

If the writing thread is interrupted in close(), it is mapped to
an InterruptedIOException. Applications like Hive and Spark must
catch these after cancelling a worker thread.

Contributed by Steve Loughran
2024-09-13 20:02:14 +01:00
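
A minimal sketch of turning the new retry option off for a third-party store, as the message above recommends; the wrapper class is hypothetical:

    import org.apache.hadoop.conf.Configuration;

    public class ThirdPartyStoreSettings {
      public static Configuration s3aConf() {
        Configuration conf = new Configuration();
        // Default is true; 503 throttling retries are handled independently
        // and are unaffected by this flag.
        conf.setBoolean("fs.s3a.retry.http.5xx.errors", false);
        return conf;
      }
    }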
Steve Loughran
57e62ae07f
Revert "YARN-11664. Remove HDFS Binaries/Jars Dependency From Yarn (#6631)"
This reverts commit 6c01490f14.
2024-09-05 14:35:50 +01:00
Syed Shameerur Rahman
6c01490f14
YARN-11664. Remove HDFS Binaries/Jars Dependency From Yarn (#6631)
To support YARN deployments in clusters without HDFS,
some changes have been made in packaging:

* new hadoop-common class org.apache.hadoop.fs.HdfsCommonConstants
* hdfs class org.apache.hadoop.hdfs.protocol.datatransfer.IOStreamPair moved
  from hdfs-client to hadoop-common
* YARN handlers for DSQuotaExceededException replaced by use of superclass
  ClusterStorageCapacityExceededException.  

Contributed by Syed Shameerur Rahman
2024-09-04 13:26:42 +01:00
Cheng Pan
9486844610
HADOOP-16928. Make javadoc work on Java 17 (#6976)
Contributed by Cheng Pan
2024-09-04 11:50:59 +01:00
zhengchenyu
1655acc5e2
HADOOP-19250. [Addendum] Fix test TestServiceInterruptHandling.testRegisterAndRaise. (#7008)
Contributed by Chenyu Zheng
2024-08-30 12:05:13 +01:00
Ayush Saxena
0837c84a9f
Revert "HADOOP-19231. Add JacksonUtil to manage Jackson classes (#6953)"
This reverts commit fa9bb0d1ac.
2024-08-29 14:42:03 +05:30
dhavalshah9131
33c9ecb652
HADOOP-19249. KMSClientProvider raises NPE with unauthed user (#6984)
KMSClientProvider raises a NullPointerException when an unauthorised user
tries to perform a key operation.

Contributed by Dhaval Shah
2024-08-20 14:03:05 +01:00
Steve Loughran
2fd7cf53fa
HADOOP-19253. Google GCS compilation fails due to VectorIO changes (#7002)
Fixes a compilation failure caused by HADOOP-19098

Restore original sortRanges() method signature,
  FileRange[] sortRanges(List<? extends FileRange>)

This ensures that the Google GCS connector will compile again.
The method has also been marked as Stable so that it is left alone.

The version returning List<? extends FileRange>
has been renamed sortRangeList().

Contributed by Steve Loughran
2024-08-19 19:58:15 +01:00
zhengchenyu
e5b76dc99f
HADOOP-19180. EC: Fix calculation errors caused by special index order (#6813). Contributed by zhengchenyu.
Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>
Signed-off-by: Shuyan Zhang <zhangshuyan@apache.org>
2024-08-19 12:40:45 +08:00
PJ Fanning
59dba6e1bd
HADOOP-19134. Use StringBuilder instead of StringBuffer. (#6692). Contributed by PJ Fanning 2024-08-18 21:29:12 +05:30
zhengchenyu
bf804cb64b
HADOOP-19250. Fix test TestServiceInterruptHandling.testRegisterAndRaise (#6987)
Contributed by Chenyu Zheng
2024-08-16 17:16:28 +01:00
PJ Fanning
fa9bb0d1ac
HADOOP-19231. Add JacksonUtil to manage Jackson classes (#6953)
New class org.apache.hadoop.util.JacksonUtil centralizes construction of
Jackson ObjectMappers and JsonFactories.

Contributed by PJ Fanning
2024-08-15 16:44:54 +01:00
Steve Loughran
55a576906d
HADOOP-19131. Assist reflection IO with WrappedOperations class (#6686)
1. The class WrappedIO has been extended with more filesystem operations:

- openFile()
- PathCapabilities
- StreamCapabilities
- ByteBufferPositionedReadable

All these static methods raise UncheckedIOExceptions rather than
checked ones.

2. The adjacent class org.apache.hadoop.io.wrappedio.WrappedStatistics
provides similar access to IOStatistics/IOStatisticsContext classes
and operations.

Allows callers to:
* Get a serializable IOStatisticsSnapshot from an IOStatisticsSource or
  IOStatistics instance
* Save an IOStatisticsSnapshot to file
* Convert an IOStatisticsSnapshot to JSON
* Given an object which may be an IOStatisticsSource, return an object
  whose toString() value is a dynamically generated, human readable summary.
  This is for logging.
* Separate getters to the different sections of IOStatistics.
* Mean values are returned as a Map.Entry<Long, Long> of (samples, sum)
  from which means may be calculated.

There are examples of the dynamic bindings to these classes in:

org.apache.hadoop.io.wrappedio.impl.DynamicWrappedIO
org.apache.hadoop.io.wrappedio.impl.DynamicWrappedStatistics

These use DynMethods and other classes in the package
org.apache.hadoop.util.dynamic which are based on the
Apache Parquet equivalents.
This makes it easier to re-implement these in that library, and in others
which have their own fork of the classes (example: Apache Iceberg).

3. The openFile() option "fs.option.openfile.read.policy" has
added specific file-format policies for the core filetypes:

* avro
* columnar
* csv
* hbase
* json
* orc
* parquet

S3A chooses the appropriate sequential/random policy for each.

A policy of `parquet, columnar, vector, random, adaptive` will use the parquet policy for
any filesystem aware of it, falling back to the first entry in the list which
the specific version of the filesystem recognizes.

4. New Path capability fs.capability.virtual.block.locations

Indicates that locations are generated client side
and don't refer to real hosts.

Contributed by Steve Loughran
2024-08-14 14:43:00 +01:00
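
A minimal sketch of passing the read-policy list from point 3 through openFile(); the path and the policy string are illustrative:

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReadPolicyExample {
      public static FSDataInputStream open(FileSystem fs, Path path) throws Exception {
        return fs.openFile(path)
            .opt("fs.option.openfile.read.policy",
                "parquet, columnar, vector, random, adaptive")
            .build()
            .get();  // the first policy the filesystem recognizes is used
      }
    }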
Viraj Jasani
321a6cc55e
HADOOP-19072. S3A: expand optimisations on stores with "fs.s3a.performance.flags" for mkdir (#6543)
If the flag list in fs.s3a.performance.flags includes "mkdir"
then the safety check of a walk up the tree to look for a parent directory
(done to verify a directory isn't being created under a file) is skipped.

This saves the cost of multiple list operations.

Contributed by Viraj Jasani
2024-08-08 17:48:51 +01:00
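
A minimal sketch of enabling the flag, assuming no other performance flags are wanted; the wrapper class is hypothetical:

    import org.apache.hadoop.conf.Configuration;

    public class MkdirPerformanceFlag {
      public static Configuration enableMkdirFlag(Configuration conf) {
        // Comma-separated flag list; "mkdir" skips the parent-directory
        // safety walk described above.
        conf.set("fs.s3a.performance.flags", "mkdir");
        return conf;
      }
    }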
Masatake Iwasaki
2a50911734
HADOOP-17609. Make SM4 support optional for OpenSSL native code. (#3019)
Reviewed-by: Steve Loughran <stevel@apache.org>
Reviewed-by: Wei-Chiu Chuang <weichiu@apache.org>
2024-08-08 21:03:05 +09:00
Steve Loughran
a5806a9e7b
HADOOP-19161. S3A: option "fs.s3a.performance.flags" to take list of performance flags (#6789)
1. Configuration adds new method `getEnumSet()` to get a set of enum values from
   a configuration string.
   <E extends Enum<E>> EnumSet<E> getEnumSet(String key, Class<E> enumClass, boolean ignoreUnknown)

   Whitespace is ignored, case is ignored and the value "*" is mapped to "all values of the enum".
   If "ignoreUnknown" is true then when parsing, unknown values are ignored.
   This is recommended for forward compatibility with later versions.

2. This support is implemented in org.apache.hadoop.fs.s3a.impl.ConfigurationHelper; it can be used
    elsewhere in the Hadoop codebase.

3. A new private FlagSet class in hadoop common manages a set of enum flags.

     It implements StreamCapabilities and can be probed for a specific option being set
     (with a prefix).


S3A adds an option fs.s3a.performance.flags which builds a FlagSet with enum
type PerformanceFlagEnum

* which initially contains {Create, Delete, Mkdir, Open}
* the existing fs.s3a.create.performance option sets the flag "Create".
* tests which configure fs.s3a.create.performance MUST clear
  fs.s3a.performance.flags in test setup.

Future performance flags are planned, with different levels of safety
and/or backwards compatibility.

Contributed by Steve Loughran
2024-07-29 11:33:51 +01:00
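
A minimal sketch of getEnumSet(); the signature is taken from the message above, while the configuration key and the enum are hypothetical:

    import java.util.EnumSet;
    import org.apache.hadoop.conf.Configuration;

    public class EnumSetExample {
      enum Feature { CREATE, DELETE, MKDIR, OPEN }  // hypothetical enum

      public static EnumSet<Feature> features(Configuration conf) {
        // "*" maps to all enum values; with ignoreUnknown=true, unrecognized
        // names are skipped, which helps forward compatibility.
        return conf.getEnumSet("example.feature.flags", Feature.class, true);
      }
    }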
Raphael Azzolini
4525c7e35e
HADOOP-19197. S3A: Support AWS KMS Encryption Context (#6874)
The new property fs.s3a.encryption.context allows users to specify the AWS KMS Encryption Context to be used in S3A.

The value of the encryption context is a key/value string that will be Base64 encoded and set in the parameter ssekmsEncryptionContext of the S3 client.

Contributed by Raphael Azzolini
2024-07-23 17:09:04 +01:00
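
A minimal sketch of setting the property; the key=value syntax and the SSE-KMS settings shown are assumptions for illustration, not taken from the message above:

    import org.apache.hadoop.conf.Configuration;

    public class KmsEncryptionContextExample {
      public static Configuration withContext(Configuration conf) {
        // Assumed pre-existing SSE-KMS setup.
        conf.set("fs.s3a.encryption.algorithm", "SSE-KMS");
        // Hypothetical key/value pairs; the client Base64-encodes the string.
        conf.set("fs.s3a.encryption.context", "project=hadoop, team=storage");
        return conf;
      }
    }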
Pranav Saxena
b60497ff41
HADOOP-19120. ApacheHttpClient adaptation in ABFS. (#6633)
Apache HttpClient 4.5.x is the new default implementation of HTTP connections;
this supports a large configurable pool of connections along with
the ability to limit their lifespan.

The networking library can be chosen using the configuration
option fs.azure.networking.library

The supported values are
- APACHE_HTTP_CLIENT : Use Apache HttpClient [Default]
- JDK_HTTP_URL_CONNECTION : Use JDK networking library

Important: unless the networking library is switched back to
the JDK, the Apache httpcore and httpclient JARs must be on the classpath.

Contributed by Pranav Saxena
2024-07-22 19:03:51 +01:00
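
A minimal sketch of switching back to the JDK library, e.g. when httpclient/httpcore cannot be placed on the classpath; the wrapper class is hypothetical:

    import org.apache.hadoop.conf.Configuration;

    public class AbfsNetworkingSelection {
      public static Configuration useJdkNetworking(Configuration conf) {
        // APACHE_HTTP_CLIENT is the default; JDK_HTTP_URL_CONNECTION opts out.
        conf.set("fs.azure.networking.library", "JDK_HTTP_URL_CONNECTION");
        return conf;
      }
    }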
fuchaohong
1577f57d4c
HADOOP-19228. ShellCommandFencer#setConfAsEnvVars should also replace '-' with '_'. (#6936). Contributed by fuchaohong.
Signed-off-by: He Xiaoqiao <hexiaoqiao@apache.org>
2024-07-20 16:13:33 +08:00
Tsz-Wo Nicholas Sze
9dad697dbc
HADOOP-19227. ipc.Server accelerate token negotiation only for the default mechanism. (#6949) 2024-07-20 15:18:22 +08:00
Viraj Jasani
1360c7574a
HADOOP-19218 Avoid DNS lookup while creating IPC Connection object (#6916). Contributed by Viraj Jasani.
Signed-off-by: Rushabh Shah <shahrs87@apache.org>
Signed-off-by: He Xiaoqiao <hexiaoqiao@apache.org>
2024-07-16 21:08:41 +08:00
gavin.wang
783a852029
HDFS-17555. Fix NumberFormatException of NNThroughputBenchmark when configured dfs.blocksize. (#6894). Contributed by wangzhongwei
Reviewed-by: He Xiaoqiao <hexiaoqiao@apache.org>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2024-07-09 13:52:15 +05:30
Steve Loughran
4c55adbb6b
HADOOP-19205. S3A: initialization/close slower than with v1 SDK (#6892)
Adds a new ClientManager interface/implementation which provides on-demand
creation of the synchronous and asynchronous S3 clients and the S3 transfer
manager, and terminates these in close().

S3A FS is modified to
* Create a ClientManagerImpl instance and pass down to its S3Store.
* Use the same ClientManager interface against S3Store to demand-create
  the services.
* Only create the async client as part of the transfer manager creation,
  which will take place during the first rename() operation.
* Statistics on client creation count and duration are recorded.
* Statistics on the time to initialize and shut down the S3A FS are collected
  in IOStatistics for reporting.

Adds to hadoop common class
  LazyAtomicReference<T> implements CallableRaisingIOE<T>, Supplier<T>
and subclass
  LazyAutoCloseableReference<T extends AutoCloseable>
    extends LazyAtomicReference<T> implements AutoCloseable

These evaluate the Supplier<T>/CallableRaisingIOE<T> they were
constructed with on the first (successful) read of the value.
Any exception raised during this operation will be rethrown, and on future
evaluations the same operation is retried.

These classes implement the Supplier and CallableRaisingIOE
interfaces so can actually be used to implement lazy function evaluation,
as Haskell and some other functional languages do.

LazyAutoCloseableReference is AutoCloseable; its close() method will
close the inner reference if it is set.

This class is used in ClientManagerImpl for the lazy S3 client creation
and closure.

Contributed by Steve Loughran.
2024-07-05 16:38:37 +01:00
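
A minimal sketch of the lazy-reference pattern; the class names and interfaces come from the message above, but the package and the constructor shown are assumptions:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.apache.hadoop.util.functional.LazyAutoCloseableReference;

    public class LazyStreamHolder implements AutoCloseable {
      // Assumed constructor: takes the CallableRaisingIOE to evaluate lazily.
      private final LazyAutoCloseableReference<InputStream> stream =
          new LazyAutoCloseableReference<>(LazyStreamHolder::openStream);

      private static InputStream openStream() throws IOException {
        return Files.newInputStream(Paths.get("/tmp/example"));  // hypothetical source
      }

      public InputStream stream() {
        return stream.get();  // first successful get() evaluates and caches
      }

      @Override
      public void close() throws Exception {
        stream.close();  // closes the inner stream only if it was ever created
      }
    }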
hfutatzhanghb
a57105462b
HADOOP-19215. Fix unit tests testSlowConnection and testBadSetup failed in TestRPC. (#6912). Contributed by farmmamba.
Reviewed-by: huhaiyang <huhaiyang926@126.com>
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
2024-07-05 12:11:39 +05:30
Steve Loughran
8ac9c1839a
HADOOP-19203. WrappedIO BulkDelete API to raise IOEs as UncheckedIOExceptions (#6885)
* WrappedIO methods raise UncheckedIOExceptions
* New class org.apache.hadoop.util.functional.FunctionalIO
  with wrap/unwrap and the ability to generate a
  java.util.function.Supplier around a CallableRaisingIOE.

Contributed by Steve Loughran
2024-06-19 18:47:29 +01:00
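
A minimal sketch of the Supplier generation described above; the method name on FunctionalIO is an assumption, only the class name and the capability come from the message:

    import java.util.function.Supplier;
    import org.apache.hadoop.util.functional.CallableRaisingIOE;
    import org.apache.hadoop.util.functional.FunctionalIO;

    public class FunctionalIOExample {
      public static Supplier<String> asSupplier(CallableRaisingIOE<String> call) {
        // Any IOException raised by call surfaces as an UncheckedIOException.
        return FunctionalIO.toUncheckedIOExceptionSupplier(call);
      }
    }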
Steve Loughran
56c8aa5f1c
HADOOP-19204. VectorIO regression: empty ranges are now rejected (#6887)
- restore old outcome: no-op
- test this
- update spec

This is a critical fix for vector IO and MUST be cherry-picked to all branches with
that feature.

Contributed by Steve Loughran
2024-06-19 12:05:24 +01:00
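
A minimal sketch of the restored contract: a vectored read with an empty range list returns immediately as a no-op rather than being rejected; the wrapper class is hypothetical:

    import java.nio.ByteBuffer;
    import java.util.Collections;
    import org.apache.hadoop.fs.FSDataInputStream;

    public class EmptyVectorRead {
      public static void readNothing(FSDataInputStream in) throws Exception {
        // Empty range list: must complete without error and without any I/O.
        in.readVectored(Collections.emptyList(), ByteBuffer::allocate);
      }
    }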
Fateh Singh
90024d8cb1
HDFS-17439. Support -nonSuperUser for NNThroughputBenchmark: useful for testing auth frameworks such as Ranger (#6677) 2024-06-18 13:52:24 +01:00
Steve Loughran
2d5fa9e016
HADOOP-18508. S3A: Support parallel integration test runs on same bucket (#5081)
It is now possible to provide a job ID in the Maven "job.id" property
for hadoop-aws test runs, to isolate the paths under the test bucket
under which all tests will be executed.

This will allow independent builds *in different source trees*
to test against the same bucket in parallel, and is designed for
CI testing.

Example:

mvn verify -Dparallel-tests -Droot.tests.enabled=false -Djob.id=1
mvn verify -Droot.tests.enabled=false -Djob.id=2

- Root tests must be disabled to stop them cleaning up
  the test paths of other test runs.
- Do still regularly run the root tests, just to force cleanup
  of the output of any interrupted test suites.

Contributed by Steve Loughran
2024-06-14 19:34:52 +01:00
Viraj Jasani
240fddcf17
HADOOP-18931. FileSystem.getFileSystemClass() to log the jar the .class came from (#6197)
Set the log level of logger org.apache.hadoop.fs.FileSystem to DEBUG to see this.

Contributed by Viraj Jasani
2024-06-14 19:14:54 +01:00
Cheng Pan
2bde5ccb81
HADOOP-19192. Log level is WARN when fail to load native hadoop libs (#6863)
Updates the documentation to be consistent with the logging.

Contributed by Cheng Pan
2024-06-14 19:05:27 +01:00
Mukund Thakur
06dd3bfee8
HADOOP-19196. Allow base path to be deleted as well using Bulk Delete. (#6872)
Contributed by: Mukund Thakur
2024-06-11 14:06:53 -05:00
Yu Zhang
f1e2ceb823
HDFS-13603: Do not propagate ExecutionException while initializing EDEK queues for keys. (#6860) 2024-06-03 09:10:06 -07:00
Steve Loughran
d00b3acd5e
HADOOP-18679. Followup: change method name case (#6854)
WrappedIO.bulkDelete_PageSize() => bulkDelete_pageSize()

Makes it consistent with the HADOOP-19131 naming scheme.
The name needs to be fixed before it is invoked through reflection,
as once that is attempted the binding won't work at run time,
though compilation will be happy.

Contributed by Steve Loughran
2024-05-30 19:34:30 +01:00
Mukund Thakur
d107931fc7
HADOOP-19188. Fix TestHarFileSystem and TestFilterFileSystem failing after bulk delete API got added. (#6848)
Follow up to: HADOOP-18679 Add API for bulk/paged delete of files and objects

Contributed by Mukund Thakur
2024-05-29 17:27:09 +01:00
刘斌
6c08e8e2aa
HADOOP-19156. ZooKeeper based state stores use different ZK address configs. (#6767). Contributed by liu bin.
Signed-off-by: Ayush Saxena <ayushsaxena@apache.org>
Signed-off-by: He Xiaoqiao <hexiaoqiao@apache.org>
2024-05-29 20:44:36 +08:00
Sebb
f11a8cfa6e
HADOOP-13147. Constructors must not call overrideable methods in PureJavaCrc32C (#6408). Contributed by Sebb. 2024-05-21 00:08:08 +05:30
Mukund Thakur
47be1ab3b6
HADOOP-18679. Add API for bulk/paged delete of files (#6726)
Applications can create a BulkDelete instance from a
BulkDeleteSource; the BulkDelete interface provides
pageSize(), the maximum number of entries which can be
deleted in one call, and a bulkDelete(Collection paths)
method which can take a collection up to pageSize() long.

This is optimized for object stores with bulk delete APIs;
the S3A connector will offer the page size of
fs.s3a.bulk.delete.page.size unless bulk delete has
been disabled.

Even with a page size of 1, the S3A implementation is
more efficient than delete(path)
as there are no safety checks for the path being a directory
or probes for the need to recreate directories.

The interface BulkDeleteSource is implemented by
all FileSystem implementations, with a page size
of 1 and mapped to delete(pathToDelete, false).
This means that callers do not need to have special
case handling for object stores versus classic filesystems.

To aid use through reflection APIs, the class
org.apache.hadoop.io.wrappedio.WrappedIO
has been created with "reflection friendly" methods.

Contributed by Mukund Thakur and Steve Loughran
2024-05-20 17:05:25 +01:00
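
A minimal sketch of a paged bulk delete; pageSize() and bulkDelete() come from the message above, while createBulkDelete() and BulkDelete being closeable are assumptions about the surrounding API:

    import java.util.List;
    import java.util.Map;
    import org.apache.hadoop.fs.BulkDelete;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BulkDeleteExample {
      public static void deleteAll(FileSystem fs, Path base, List<Path> paths)
          throws Exception {
        try (BulkDelete bulk = fs.createBulkDelete(base)) {
          int page = bulk.pageSize();  // 1 on classic filesystems
          for (int i = 0; i < paths.size(); i += page) {
            List<Map.Entry<Path, String>> failures =
                bulk.bulkDelete(paths.subList(i, Math.min(i + page, paths.size())));
            // each entry is a path that could not be deleted, with the reason
          }
        }
      }
    }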
skyskyhu
3c00093cb5
HADOOP-19167 Bug Fix: Change of Codec configuration does not work (#6807) 2024-05-17 10:27:39 +08:00
Vikas Kumar
f8dce6c501
HADOOP-18851. Performance improvement for DelegationTokenSecretManager (#6803) 2024-05-16 12:30:52 +08:00
Christopher Tubbs
2e77b7b02c
[HADOOP-18786] Use CDN instead of ASF archive (#5789)
* Use Yetus 0.14.1 from downloads.apache.org in yetus-wrapper
* Use Maven 3.8.8 from downloads.apache.org in Win 10 Dockerfile
* Point users to downloads.apache.org for JVSC
* Use Solr 8.11.2 from downloads.apache.org in YARN Dockerfile

Contributed by Christopher Tubbs
2024-05-14 20:09:52 +01:00
zhihui wang
39dee8ea19
HADOOP-18958. Improve UserGroupInformation debug log. (#6255)
Contributed by zhihui wang
2024-05-14 20:03:49 +01:00
Tsz-Wo Nicholas Sze
bda7045070
HADOOP-19152. Do not hard code security providers. (#6739) 2024-05-14 11:19:57 -07:00
zhengchenyu
4cb4d5dd08
HADOOP-19170. Fixes compilation issues on non-Linux systems (#6822)
Reviewed-by: Steve Loughran <stevel@apache.org>
Reviewed-by: Wei-Chiu Chuang <weichiu@apache.org>
2024-05-13 20:04:01 -07:00