Addresses
* CVE-2024-29857 - Importing an EC certificate with specially crafted F2m parameters can cause high CPU usage during parameter evaluation.
* CVE-2024-30171 - Possible timing-based leakage in RSA-based handshakes due to exception processing has been eliminated.
* CVE-2024-30172 - Crafted signature and public key can be used to trigger an infinite loop in the Ed25519 verification code.
* CVE-2024-301XX - When endpoint identification is enabled and an SSL socket is not created with an explicit hostname (as happens with HttpsURLConnection), hostname verification could be performed against a DNS-resolved IP address.
Contributed by PJ Fanning
This addresses two CVEs triggered by malformed archives:
* Important: Denial of Service CVE-2024-25710
* Moderate: Denial of Service CVE-2024-26308
Contributed by PJ Fanning
Exclude more artifacts which are dependencies of hadoop-* modules,
with the goal of keeping conflicts out of downstream applications.
In particular we have pruned the dependencies of:
- zookeeper
- other libraries referencing logging
This keeps slf4j-log4j12 and log4j 1.2 off the classpath
of applications importing hadoop-common.
Logback references may still surface; applications
pulling in hadoop-common directly or indirectly should
review their imports carefully.
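One quick way to verify which binding actually ended up on the classpath is to ask SLF4J for its active logger factory; a minimal sketch:

    import org.slf4j.LoggerFactory;

    public class Slf4jBindingCheck {
      public static void main(String[] args) {
        // Prints the concrete ILoggerFactory class, revealing whether
        // log4j 1.2, reload4j, or logback is the active binding.
        System.out.println(LoggerFactory.getILoggerFactory().getClass().getName());
      }
    }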
Contributed by Steve Loughran
Co-authored-by: Wei-Chiu Chuang <weichiu@apache.org>
Includes HADOOP-18354. Upgrade reload4j to 1.22.2 due to XXE vulnerability (#4607).
Log4j 1.2.17 has been replaced by reload4j 1.22.2.
SLF4J is at 1.7.36
Cut out S3 Select
* leave public/unstable constants alone
* the s3guard tool will fail with an error
* the S3AFileSystem path capability probe for select will fail
* openFile() with a select option will fail with a specific error (see the sketch below)
* the S3 Select documentation has been updated
* the eventstream JAR has been cut
* New test: ITestSelectUnsupported verifies the new failure
handling above
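Illustrative only: a minimal sketch of the new failure mode as seen from application code. The select option key fs.s3a.select.sql and the bucket name are assumptions for the example.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3SelectRemovedSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
            new URI("s3a://example-bucket/"), new Configuration());
        try {
          fs.openFile(new Path("s3a://example-bucket/data.csv"))
              .must("fs.s3a.select.sql", "SELECT * FROM S3OBJECT s")
              .build()
              .get();
        } catch (Exception e) {
          // Expected: the select option is now rejected rather than honoured.
          System.out.println("S3 Select rejected: " + e);
        }
      }
    }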
Contributed by Steve Loughran
This update ensures that the timeout set in fs.s3a.connection.request.timeout is passed down
to calls to CreateSession made in the AWS SDK to get S3 Express session tokens.
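A minimal sketch of setting the option before the filesystem is created; the bucket name and the 30s value are illustrative, and the duration suffix assumes a release which accepts time units on this option.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class RequestTimeoutSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Bounds each SDK request, now including the CreateSession calls
        // made to obtain S3 Express session tokens.
        conf.set("fs.s3a.connection.request.timeout", "30s");
        FileSystem fs = FileSystem.get(
            new URI("s3a://example--usw2-az1--x-s3/"), conf);
        System.out.println(fs.getUri());
      }
    }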
Contributed by Steve Loughran
This addresses:
- [sonatype-2021-4916] CWE-327: Use of a Broken or Risky Cryptographic Algorithm
- [sonatype-2019-0673] CWE-400: Uncontrolled Resource Consumption ('Resource Exhaustion')
Contributed by Murali Krishna
With this upgrade, it is possible to connect to an Amazon S3 Express One Zone bucket.
Some tests from the S3A test suite will currently fail against a One Zone bucket,
as One Zone buckets do not support some standard S3 features (e.g. SSE-KMS),
and certain operations behave slightly differently
(e.g. listMPU will return a directory that has incomplete MPUs).
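A minimal sketch of connecting; the bucket name is an assumption, following the "--azid--x-s3" suffix that One Zone buckets use.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3ExpressConnectSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
            new URI("s3a://demo--usw2-az1--x-s3/"), new Configuration());
        // Simple probe against the bucket root.
        System.out.println(fs.getFileStatus(new Path("/")));
      }
    }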
Contributed by Ahmar Suhail
Follow-up to the previous HADOOP-18487 patch: changes the scope of
protobuf-2.5 in hadoop-common and elsewhere from "compile" to "provided".
This means that protobuf-2.5 is
* No longer included in hadoop distributions
* No longer exported by hadoop common POM files
* No longer exported transitively by other hadoop modules.
* No longer listed in LICENSE-binary.
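Downstream builds which still need the unshaded protobuf 2.5 classes must now declare the dependency themselves. A quick runtime probe for its presence, as a sketch:

    public class ProtobufPresenceCheck {
      public static void main(String[] args) {
        try {
          // com.google.protobuf is the package of the unshaded protobuf-java JAR.
          Class.forName("com.google.protobuf.Message");
          System.out.println("unshaded protobuf is on the classpath");
        } catch (ClassNotFoundException e) {
          System.out.println("unshaded protobuf absent, as expected with provided scope");
        }
      }
    }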
Contributed by Steve Loughran
v1 => 1.12.565
v2 => 2.20.160
Only the v2 SDK is distributed; v1 is needed in deployments only to support v1 credential providers
Contributed by Steve Loughran
Protobuf 2.5 JAR is no longer needed at runtime.
The option common.protobuf2.scope defines whether the protobuf 2.5.0
dependency is marked as provided or not.
* New package org.apache.hadoop.ipc.internal for internal-only protobuf classes,
with a ShadedProtobufHelper in there which uses shaded protobuf references
only, so is guaranteed not to need protobuf-2.5 on the classpath
* All uses of org.apache.hadoop.ipc.ProtobufHelper have
been replaced by uses of org.apache.hadoop.ipc.internal.ShadedProtobufHelper
* The scope of protobuf-2.5 is set by the option common.protobuf2.scope;
in this patch it is still "compile"
* There is explicit reference to it in modules where it may be needed.
* The maven scope of the dependency can be set with the common.protobuf2.scope
option. It can be set to "provided" in a build:
-Dcommon.protobuf2.scope=provided
* Add new ipc(callable) method to catch and convert shaded protobuf
exceptions raised during invocation of the supplied lambda expression
(see the sketch after this list)
* This is adopted in the code where the migration is not traumatically
over-complex. RouterAdminProtocolTranslatorPB is left alone for this
reason.
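A sketch of the new call pattern. The protocol stub and its method are hypothetical stand-ins for a generated *PB interface; the assumption is that ipc() converts the shaded ServiceException raised by the lambda into an IOException.

    import java.io.IOException;

    import org.apache.hadoop.thirdparty.protobuf.ServiceException;

    import static org.apache.hadoop.ipc.internal.ShadedProtobufHelper.ipc;

    public class TranslatorSketch {
      // Hypothetical protocol stub; real translators wrap generated *PB stubs.
      interface MyProtocolPB {
        long getLength(String src) throws ServiceException;
      }

      private final MyProtocolPB rpcProxy;

      TranslatorSketch(MyProtocolPB rpcProxy) {
        this.rpcProxy = rpcProxy;
      }

      // Old style: invoke the stub, catch ServiceException, unwrap by hand.
      // New style: ipc() runs the lambda and converts the shaded exception.
      public long getLength(String src) throws IOException {
        return ipc(() -> rpcProxy.getLength(src));
      }
    }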
Contributed by Steve Loughran
All uses of jersey-json in the YARN and other Hadoop modules now
exclude the obsolete org.codehaus.jettison/jettison artifact, avoiding
the security issues which can come from that library.
Contributed by PJ Fanning
This patch migrates the S3A connector to use the V2 AWS SDK.
This is a significant change at the source code level.
Any applications using the internal extension/override points in
the filesystem connector are likely to break.
This includes but is not limited to:
- Code invoking methods on the S3AFileSystem class
which used classes from the V1 SDK.
- The ability to define the factory for the `AmazonS3` client, and
to retrieve it from the S3AFileSystem. There is a new factory
API and a special interface S3AInternals to access a limited
set of internal classes and operations (see the sketch after this list).
- Delegation token and auditing extensions.
- Classes trying to integrate with the AWS SDK.
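A minimal sketch of the new access path, assuming the getS3AInternals() accessor on S3AFileSystem and the getAmazonS3Client(String reason) method on S3AInternals; the bucket name is illustrative.

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.s3a.S3AFileSystem;

    import software.amazon.awssdk.services.s3.S3Client;

    public class S3AInternalsSketch {
      public static void main(String[] args) throws Exception {
        S3AFileSystem fs = (S3AFileSystem) FileSystem.get(
            new URI("s3a://example-bucket/"), new Configuration());
        // The V2 SDK client is reached through S3AInternals, with a reason
        // string supplied, rather than from S3AFileSystem itself.
        S3Client s3 = fs.getS3AInternals().getAmazonS3Client("diagnostics");
        System.out.println(s3.serviceName());
      }
    }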
All standard V1 credential providers listed in the option
fs.s3a.aws.credentials.provider will be automatically remapped to their
V2 equivalent.
Other V1 Credential Providers are supported, but only if the V1 SDK is
added back to the classpath.
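For example, a configuration naming a standard V1 provider keeps working unchanged; a sketch using a standard V1 class name:

    import org.apache.hadoop.conf.Configuration;

    public class V1ProviderRemapSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // This V1 class name is remapped at runtime to its V2 equivalent,
        // so the V1 SDK does not need to be on the classpath.
        conf.set("fs.s3a.aws.credentials.provider",
            "com.amazonaws.auth.EnvironmentVariableCredentialsProvider");
      }
    }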
The SDK Signing plugin has changed; all v1 signers are incompatible.
There is no support for the S3 "v2" signing algorithm.
Finally, the aws-sdk-bundle JAR has been replaced by the shaded V2
equivalent, "bundle.jar", which is now exported by the hadoop-aws module.
Consult the document aws_sdk_upgrade for the full details.
Contributed by Ahmar Suhail + some bits by Steve Loughran