To avoid the ABFS instance getting closed due to GC while the streams are working, attach the ABFS instance to a backReference opaque object and pass it down to the streams, so that a hard reference is held for as long as the streams are working.
Contributed by: Mehakmeet Singh
* Add jdiff xml files from 3.3.6 release.
* Declare 3.3.6 as the latest stable release.
* Copy release notes.
(cherry picked from commit 7db9895000)
(cherry picked from commit cc121e2124aa01458dc296a060edc5e21a295268)
This modifies the manifest committer so that the list of files
to rename is passed between stages as a file of
writable entries on the local filesystem.
The map of directories to create is still passed in memory;
this map is built across all tasks, so even if many tasks
create files, as long as they all write into the same set of directories
the memory needed is O(directories), with the
task count not a factor.
The _SUCCESS file reports on heap size through gauges,
which should give a warning if there are problems.
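As a rough illustration of the mechanism (RenameEntry is an illustrative class, not the committer's actual code), entries can be streamed to a local file as Hadoop Writable records in one stage and read back in the next:

    import java.io.DataInput;
    import java.io.DataInputStream;
    import java.io.DataOutput;
    import java.io.DataOutputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    /** Illustrative (source, destination) rename record. */
    public class RenameEntry implements Writable {
      private final Text source = new Text();
      private final Text dest = new Text();

      @Override
      public void write(DataOutput out) throws IOException {
        source.write(out);
        dest.write(out);
      }

      @Override
      public void readFields(DataInput in) throws IOException {
        source.readFields(in);
        dest.readFields(in);
      }

      public static void main(String[] args) throws IOException {
        File spill = File.createTempFile("renames", ".bin");
        // Stage 1: stream each entry to local disk as it is found,
        // so memory use does not grow with the number of files.
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(spill))) {
          RenameEntry e = new RenameEntry();
          e.source.set("s3a://bucket/job/__magic/task-0000/part-0000");
          e.dest.set("s3a://bucket/job/part-0000");
          e.write(out);
        }
        // Stage 2: read the entries back and perform the renames.
        try (DataInputStream in = new DataInputStream(new FileInputStream(spill))) {
          RenameEntry e = new RenameEntry();
          e.readFields(in);
        }
      }
    }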
Contributed by Steve Loughran
This:
1. Adds optLong, optDouble, mustLong and mustDouble
methods to the FSBuilder interface to let callers explicitly
pass in long and double arguments.
2. The opt() and must() builder calls which take float/double values
now only set long values instead, so as to avoid problems
related to overloaded methods resulting in a ".0" being appended
to a long value.
3. All of the relevant opt/must calls in the hadoop codebase move to
the new methods
4. And the s3a code is resilient to parse errors in its numeric options
-it will downgrade to the default.
This is nominally incompatible, but the floating-point builder methods
were never used: nothing currently expects floating point numbers.
For anyone who wants to safely set numeric builder options across all compatible
releases, convert the number to a string and then use the opt(String, String)
and must(String, String) methods.
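A hedged usage sketch of the two approaches when opening a file; fs.option.openfile.length is used here purely as an example of a numeric option, and optLong() is only present on releases containing this change:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FutureDataInputStreamBuilder;
    import org.apache.hadoop.fs.Path;

    public class OpenWithNumericOption {
      public static void main(String[] args) throws Exception {
        Path path = new Path(args[0]);
        FileSystem fs = path.getFileSystem(new Configuration());
        long knownLength = 1_048_576L;   // illustrative value

        FutureDataInputStreamBuilder builder = fs.openFile(path);
        // On releases with this change: an explicitly typed long setter.
        builder.optLong("fs.option.openfile.length", knownLength);
        // Portable across all compatible releases: pass the number as a string.
        builder.opt("fs.option.openfile.length", Long.toString(knownLength));

        try (FSDataInputStream in = builder.build().get()) {
          System.out.println("first byte: " + in.read());
        }
      }
    }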
Contributed by Steve Loughran
The HDFS lease APIs have been replicated as interfaces in hadoop-common so other filesystems can
also implement them. Applications which use the leasing APIs should migrate to the new
interface where possible.
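A hedged sketch of probing for the new interface rather than casting to DistributedFileSystem; the interface name LeaseRecoverable and its recoverLease()/isFileClosed() methods are assumptions mirroring the HDFS originals:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LeaseRecoverable;
    import org.apache.hadoop.fs.Path;

    public class LeaseRecoveryProbe {
      public static void main(String[] args) throws Exception {
        Path path = new Path(args[0]);
        FileSystem fs = path.getFileSystem(new Configuration());
        if (fs instanceof LeaseRecoverable) {
          LeaseRecoverable lr = (LeaseRecoverable) fs;
          boolean recovered = lr.recoverLease(path);   // assumed signature
          System.out.println("lease recovered: " + recovered
              + ", file closed: " + lr.isFileClosed(path));
        } else {
          System.out.println(fs.getScheme() + " does not implement lease recovery");
        }
      }
    }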
Contributed by Stephen Wu
* HADOOP-18587. Fixing jettison vulnerability of hadoop-common lib
* No need to exclude it; let it come through.
Change-Id: Ia6e4ad351158dd4b0510dec34bbde531a60e7654
The log level can only be set on Log4J log implementations;
probes are used to downgrade to a warning when other
logging back ends are used
Contributed by Viraj Jasani
Even though DiskChecker.mkdirsWithExistsCheck() will create the directory tree,
it is only called *after* the enumeration of directories with available
space has completed.
Directories which don't exist are reported as having 0 space, therefore
the mkdirs code is never reached.
Adding a simple mkdirs() call (without bothering to check the outcome)
ensures that if a dir has been deleted then it will be recreated
if possible. If it can't be, it will still have 0 bytes of space
reported and so be excluded from the allocation.
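A minimal sketch of the resulting ordering, using plain JDK calls rather than the actual DiskChecker code: create the directory best-effort before asking for its free space, so a deleted directory is recreated instead of being reported as having no space:

    import java.io.File;

    public class DirSpaceProbe {
      /** Usable space for a local dir, recreating it first if it has been deleted. */
      static long usableSpace(String path) {
        File dir = new File(path);
        dir.mkdirs();                 // best-effort; the outcome is deliberately ignored
        return dir.getUsableSpace();  // still 0 if the dir could not be created
      }

      public static void main(String[] args) {
        System.out.println(usableSpace(args.length > 0 ? args[0] : "/tmp/local-dir"));
      }
    }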
Contributed by Steve Loughran
Expands on the comments in cluster config to tell people
they shouldn't be running a cluster without a private VLAN
in cloud deployments, that Knox is a good protection here, and that unsecured clusters
without a VLAN are just computation-as-a-service to crypto miners.
Contributed by Steve Loughran
Changes method name of RPC.Builder#setnumReaders to setNumReaders()
The original method is still there, just marked deprecated.
It is the one which should be used when working with older branches.
Contributed by Haiyang Hu
When closing we need to wrap the flush() in a try .. finally; otherwise,
when flush() throws, it will stop completion of the remainder of the
close activities, in particular the close of the underlying wrapped
stream object, resulting in a resource leak.
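A minimal sketch of the pattern, with an illustrative wrapper class rather than the patched Hadoop stream:

    import java.io.FilterOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public class SafeClosingStream extends FilterOutputStream {
      public SafeClosingStream(OutputStream out) {
        super(out);
      }

      @Override
      public void close() throws IOException {
        try {
          flush();        // may throw
        } finally {
          out.close();    // always close the wrapped stream, avoiding a resource leak
        }
      }
    }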
Contributed by Colm Dougan
Contributed by Viraj Jasani <vjasani@apache.org>
Signed-off-by: Chris Nauroth <cnauroth@apache.org>
Signed-off-by: Steve Loughran <stevel@apache.org>
Signed-off-by: Mingliang Liu <liuml07@apache.org>
Part of HADOOP-18469 and the hardening of XML/XSL parsers.
Followup to the main HADOOP-18575 patch, to improve performance when
working with xml/xsl engines which don't support the relevant attributes.
Include this change when backporting.
Contributed by PJ Fanning.
The kerberos RFC does not declare any restriction on
characters used in kerberos names, though
implementations MAY be more restrictive.
If the kerberos controller supports the use of non-conventional
principal names *and the kerberos admin chooses to use them*,
this can confuse some of the parsing.
The obvious solution is for the enterprise admins to "not do that"
as a lot of things break, bits of hadoop included.
Harden the hadoop code slightly so that at least we fail more gracefully;
people can then get in touch with their sysadmin and tell them
to stop it.
This has triggered an OOM in a process which was churning through s3a fs
instances; the increased memory footprint of IOStatistics amplified what
must have been a long-standing issue with FS instances being created
and not closed()
* Makes sure instrumentation is closed when the FS is closed.
* Uses a weak reference from metrics to instrumentation, so even
if the FS wasn't closed (see HADOOP-18478), this back reference
would not cause the S3AInstrumentation reference to be retained.
* If S3AFileSystem is configured to log at TRACE it will log the
calling stack of initialize(), to help identify where the
instance is being created. This should help track down
the cause of instance leakage.
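An illustrative sketch of the weak-reference pattern described above; class and field names are assumptions, not the actual S3A code:

    import java.lang.ref.WeakReference;

    /** Metrics source that must not keep the instrumentation object alive on its own. */
    class MetricsSourceSketch {
      private final WeakReference<Object> instrumentationRef;

      MetricsSourceSketch(Object instrumentation) {
        // A weak back reference: if the filesystem is never closed, this alone
        // will not stop the instrumentation from being garbage collected.
        this.instrumentationRef = new WeakReference<>(instrumentation);
      }

      void update() {
        Object instrumentation = instrumentationRef.get();
        if (instrumentation == null) {
          return;   // the owner has been collected; nothing to update
        }
        // ... update metrics through the strong local reference ...
      }
    }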
Contributed by Steve Loughran.
This addresses HADOOP-18521, "ABFS ReadBufferManager buffer sharing
across concurrent HTTP requests" by not trying to cancel
in-progress reads.
It supersedes HADOOP-18528, which disables the prefetching.
If that patch is applied *after* this one, prefetching
will be disabled.
As well as changing the default value in the code,
core-default.xml is updated to set
fs.azure.enable.readahead = true
As a result, if Configuration.get("fs.azure.enable.readahead")
returns a non-null value, then it can be inferred that
it was set either in core-default.xml (the fix is present)
or in core-site.xml (someone asked for it).
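A small sketch of that inference, using only the public Configuration API:

    import org.apache.hadoop.conf.Configuration;

    public class ReadaheadProbe {
      public static void main(String[] args) {
        Configuration conf = new Configuration();   // loads core-default.xml and core-site.xml
        String value = conf.get("fs.azure.enable.readahead");
        if (value != null) {
          // Either core-default.xml carries the fix, or core-site.xml set it explicitly.
          System.out.println("fs.azure.enable.readahead = " + value);
        } else {
          System.out.println("option undefined: this release predates the change");
        }
      }
    }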
Contributed by Pranav Saxena.
* Exactly one sending thread per RPC connection.
* If the calling thread is interrupted before the socket write, the request will be skipped instead of being sent anyway.
* If the calling thread is interrupted during the socket write, the write will finish.
* RPC requests will be written to the socket in the order received.
* The sending thread is only started by the receiving thread.
* The sending thread periodically checks the shouldCloseConnection flag.
Disables block prefetching on ABFS InputStreams, by setting
fs.azure.enable.readahead to false in core-default.xml and
the matching Java constant.
This prevents
HADOOP-18521. ABFS ReadBufferManager buffer sharing across concurrent HTTP requests.
Once a fix for that is committed, this change can be reverted.
Contributed by Mehakmeet Singh.
Updates okhttp3 and okio so their transitive dependency on Kotlin
stdlib is free from recent CVEs.
okhttp3:okhttp => 4.10.0
okio:okio => 3.2.0
kotlin stdlib => 1.6.20
kotlin CVEs fixed:
CVE-2022-24329
CVE-2020-29582
Contributed by PJ Fanning.
Move construction of XML parsers in YARN
modules to using the locked-down parser factory
of HADOOP-18469.
One exception: GpuDeviceInformationParser still supports DTD resolution;
all other features are disabled.
Contributed by P J Fanning
Add to XMLUtils a set of methods to create secure XML Parsers/transformers, locking down DTD, schema, XXE exposure.
Use these wherever XML parsers are created.
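A hedged sketch of the kind of lock-down this applies, expressed with standard JAXP calls rather than the actual XMLUtils method names:

    import javax.xml.XMLConstants;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.ParserConfigurationException;

    public class SecureParserSketch {
      static DocumentBuilder newSecureDocumentBuilder() throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Forbid DOCTYPE declarations entirely, which blocks most XXE attacks.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces: no external entities or entity expansion.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        // Some parsers reject these attributes; that is the case the
        // follow-up improvement mentioned earlier handles gracefully.
        dbf.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
        dbf.setAttribute(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");
        return dbf.newDocumentBuilder();
      }
    }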
Contributed by PJ Fanning
Make S3APrefetchingInputStream.seek() completely lazy. Calls to seek() will not affect the current buffer nor interfere with prefetching, until read() is called.
This change allows various usage patterns to benefit from prefetching, e.g. when calling readFully(position, buffer) in a loop for contiguous positions the intermediate internal calls to seek() will be noops and prefetching will have the same performance as in a sequential read.
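A small usage sketch of the pattern that benefits, using standard FileSystem and PositionedReadable calls:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ContiguousPositionedReads {
      public static void main(String[] args) throws Exception {
        Path path = new Path(args[0]);
        FileSystem fs = path.getFileSystem(new Configuration());
        long length = fs.getFileStatus(path).getLen();
        byte[] buffer = new byte[1 << 20];   // 1 MiB chunks

        try (FSDataInputStream in = fs.open(path)) {
          for (long pos = 0; pos + buffer.length <= length; pos += buffer.length) {
            // Each positioned read performs an internal seek(); with the lazy seek
            // these are no-ops for contiguous positions, so prefetching behaves
            // as it would for a purely sequential read.
            in.readFully(pos, buffer);
          }
        }
      }
    }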
Contributed by Alessandro Passaro.
part of HADOOP-18103.
Also introducing a config fs.s3a.vectored.active.ranged.reads
to configure the maximum number of range reads a
single input stream can have active (downloading, or queued)
to the central FileSystem instance's pool of queued operations.
This stops a single stream overloading the shared thread pool.
Contributed by: Mukund Thakur
This problem surfaced in impala integration tests
IMPALA-11592. TestLocalCatalogRetries.test_fetch_metadata_retry fails in S3 build
after the change
HADOOP-17461. Add thread-level IOStatistics Context
The actual GC race condition came with
HADOOP-18091. S3A auditing leaks memory through ThreadLocal references
The fix for this is, if our hypothesis is correct, in WeakReferenceMap.create()
where a strong reference to the new value is kept in a local variable
*and referred to later* so that the JVM will not GC it.
Along with the fix, extra assertions ensure that if the problem is not fixed,
applications will fail faster/more meaningfully.
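An illustrative sketch of that idea, not the actual WeakReferenceMap code: the new value is held in a strong local reference and returned, so it cannot be collected between being stored in the weak-valued map and being handed to the caller:

    import java.lang.ref.WeakReference;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    class WeakValueMapSketch<K, V> {
      private final Map<K, WeakReference<V>> map = new ConcurrentHashMap<>();
      private final Function<K, V> factory;

      WeakValueMapSketch(Function<K, V> factory) {
        this.factory = factory;
      }

      V create(K key) {
        V value = factory.apply(key);              // strong reference held in a local variable
        map.put(key, new WeakReference<>(value));
        return value;                              // referred to later, so the JVM will not GC it here
      }
    }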
Contributed by Steve Loughran.
part of HADOOP-18103.
While merging the ranges in CheckSumFs, they are rounded up based on the
value of the checksum bytes size, which leads to some ranges crossing the EOF;
these ranges need to be fixed, otherwise an EOFException is raised during the actual reads.
Contributed By: Mukund Thakur
* This PR adds an option use.platformToolsetVersion that makes the build system use this platform toolset version.
* This also makes sure that win-vs-upgrade.cmd does not get executed when the use.platformToolsetVersion option is specified.
Avoid reconnecting to the old address after detecting that the address has been updated.
* Fix Checkstyle line length violation
* Keep ConnectionId as Immutable for map key
The ConnectionId is used as a key in the connections map, and updating the remoteId caused problems with the cleanup of connections when the removeMethod was used.
Instead of updating the address within the remoteId, use the removeMethod to cleanup references to the current identifier and then replace it with a new identifier using the updated address.
* Use final to protect immutable ConnectionId
Mark non-test fields as private and final, and add a missing accessor.
* Use a stable hashCode to allow safe IP addr changes
* Add test that updated address is used
Once the address has been updated, it should be used in future calls. Check to ensure that a second request succeeds and that it uses the existing updated address instead of having to re-resolve.
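An illustrative sketch of the map-key property being protected here, not the actual ConnectionId class: equality and hashCode depend only on fields that stay stable when the remote address is re-resolved:

    import java.util.Objects;

    /** Illustrative immutable key; the real ConnectionId carries more state. */
    final class ConnectionKeySketch {
      private final String host;   // logical host name, stable across IP changes
      private final int port;

      ConnectionKeySketch(String host, int port) {
        this.host = host;
        this.port = port;
      }

      @Override
      public boolean equals(Object o) {
        if (!(o instanceof ConnectionKeySketch)) {
          return false;
        }
        ConnectionKeySketch other = (ConnectionKeySketch) o;
        return port == other.port && host.equals(other.host);
      }

      @Override
      public int hashCode() {
        // Stable: excludes the resolved InetAddress, so entries can still be found
        // and removed from the connections map after the IP address changes.
        return Objects.hash(host, port);
      }
    }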
Signed-off-by: Nick Dimiduk <ndimiduk@apache.org>
Signed-off-by: sokui
Signed-off-by: XanderZu
Signed-off-by: stack <stack@apache.org>
This is the preview release of the HADOOP-18028 S3A performance input stream.
It is still stabilizing, but ready to test.
Contains
HADOOP-18028. High performance S3A input stream (#4109)
Contributed by Bhalchandra Pandit.
HADOOP-18180. Replace use of twitter util-core with java futures (#4115)
Contributed by PJ Fanning.
HADOOP-18177. Document prefetching architecture. (#4205)
Contributed by Ahmar Suhail
HADOOP-18175. fix test failures with prefetching s3a input stream (#4212)
Contributed by Monthon Klongklaew
HADOOP-18231. S3A prefetching: fix failing tests & drain stream async. (#4386)
* adds in new test for prefetching input stream
* creates streamStats before opening stream
* updates numBlocks calculation method
* fixes ITestS3AOpenCost.testOpenFileLongerLength
* drains stream async
* fixes failing unit test
Contributed by Ahmar Suhail
HADOOP-18254. Disable S3A prefetching by default. (#4469)
Contributed by Ahmar Suhail
HADOOP-18190. Collect IOStatistics during S3A prefetching (#4458)
This adds IOStatistics collection to the S3PrefetchingInputStream class, with
new statistic names in StreamStatistics.
This stream is not (yet) IOStatisticsContext aware.
Contributed by Ahmar Suhail
HADOOP-18379 rebase feature/HADOOP-18028-s3a-prefetch to trunk
HADOOP-18187. Convert s3a prefetching to use JavaDoc for fields and enums.
HADOOP-18318. Update class names to be clear they belong to S3A prefetching
Contributed by Steve Loughran
The name of the option to enable/disable thread-level statistics is
"fs.iostatistics.thread.level.enabled".
There is also an enabled() probe in IOStatisticsContext which can
be used to see if thread-level statistics collection is active.
Contributed by Viraj Jasani
This adds a thread-level collector of IOStatistics, IOStatisticsContext,
which can be:
* Retrieved for a thread and cached for access from other
threads.
* reset() to record new statistics.
* Queried for live statistics through the
IOStatisticsSource.getIOStatistics() method.
* Queried for a statistics aggregator for use in instrumented
classes.
* Asked to create a serializable copy in snapshot()
The goal is to make it possible for applications with multiple
threads performing different work items simultaneously
to be able to collect statistics on the individual threads,
and so generate aggregate reports on the total work performed
for a specific job, query or similar unit of work.
Some changes in IOStatistics-gathering classes are needed for
this feature:
* Caching the active context's aggregator in the object's
constructor
* Updating it in close()
Slightly more work is needed in multithreaded code,
such as the S3A committers, which collect statistics across
all threads used in task and job commit operations.
Currently the IOStatisticsContext-aware classes are:
* The S3A input stream, output stream and list iterators.
* RawLocalFileSystem's input and output streams.
* The S3A committers.
* The TaskPool class in hadoop-common, which propagates
the active context into scheduled worker threads.
Collection of statistics in the IOStatisticsContext
is disabled process-wide by default until the feature
is considered stable.
To enable the collection, set the option
fs.thread.level.iostatistics.enabled
to "true" in core-site.xml;
Contributed by Mehakmeet Singh and Steve Loughran
Reduce the ExitUtil synchronized block scopes so that System.exit
and Runtime.halt calls aren't within their boundaries,
and ExitUtil wrappers therefore do not block each other.
Catches are enlarged to all Throwables (not just Exceptions).
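An illustrative sketch of the narrowed-lock pattern, not the actual ExitUtil code: the decision is taken under the lock, but the halt happens outside it, so a second caller is never blocked behind a JVM that is already going down:

    public class NarrowLockHalt {
      private final Object lock = new Object();
      private boolean halting;

      public void haltOnce(int status) {
        synchronized (lock) {
          if (halting) {
            return;         // another thread is already halting the JVM
          }
          halting = true;   // only the bookkeeping happens under the lock
        }
        Runtime.getRuntime().halt(status);   // outside the synchronized block
      }
    }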
Contributed by Remi Catherinot
* HADOOP-18321. Fix when to read an additional record from a BZip2 text file split
Co-authored-by: Ashutosh Gupta <ashugpt@amazon.com> and Reviewed by Akira Ajisaka.
Update the dependencies of the LDAP libraries used for testing:
ldap-api.version = 2.0.0
apacheds.version = 2.0.0.AM26
Contributed by Colm O hEigeartaigh.
part of HADOOP-18103.
Handles memory fragmentation in the S3A vectored IO implementation by
allocating buffers of only the size requested for each user range,
filling them directly from the remote S3 stream, and skipping undesired
data in between ranges.
This patch also aborts active vectored reads when the stream is
closed or unbuffer() is called.
Contributed By: Mukund Thakur
part of HADOOP-18103.
Required for the vectored IO feature. None of the current buffer pool
implementations is complete. ElasticByteBufferPool doesn't use
weak references and could lead to memory leaks, and
DirectBufferPool doesn't support caller preferences of direct
or heap buffers and only offers fixed-length buffers.
Contributed By: Mukund Thakur
Part of HADOOP-18103.
Introducing fs.s3a.vectored.read.min.seek.size and fs.s3a.vectored.read.max.merged.size
to configure the minimum seek and maximum merged read size during a vectored IO
operation in the S3A connector.
These properties define how the ranges will be merged. To completely
disable merging, set fs.s3a.vectored.read.min.seek.size to 0.
Contributed By: Mukund Thakur
part of HADOOP-18103.
Adds support for a multi-range vectored read API in PositionedReadable.
The default implementation iterates through the ranges and reads each synchronously,
but the intent is that FSDataInputStream subclasses can provide more
efficient readers, especially in object store implementations.
Also adds an implementation in S3A where smaller ranges are merged and
sliced byte buffers are returned to the readers. All the merged ranges are
fetched from S3 asynchronously.
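A hedged usage sketch of the API as described; FileRange.createFileRange() and the readVectored() signature are taken to match the description above:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileRange;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class VectoredReadSketch {
      public static void main(String[] args) throws Exception {
        Path path = new Path(args[0]);
        FileSystem fs = path.getFileSystem(new Configuration());

        List<FileRange> ranges = new ArrayList<>();
        ranges.add(FileRange.createFileRange(0, 4096));          // offset, length
        ranges.add(FileRange.createFileRange(1_048_576, 4096));

        try (FSDataInputStream in = fs.open(path)) {
          // S3A may merge nearby ranges and fetch them asynchronously.
          in.readVectored(ranges, ByteBuffer::allocate);
          for (FileRange range : ranges) {
            ByteBuffer data = range.getData().get();   // completes when the range has been read
            System.out.println("read " + data.remaining() + " bytes @ " + range.getOffset());
          }
        }
      }
    }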
Contributed By: Owen O'Malley and Mukund Thakur
Regression caused by HDFS-16563; the hdfs exception text was changed, but because it was
a YARN test doing the check, Yetus didn't notice.
Contributed by zhengchenyu
Speed up the magic committer with key changes being
* Writes under __magic always retain directory markers
* File creation under __magic skips all overwrite checks,
including the LIST call intended to stop files being
created over dirs.
* mkdirs under __magic probes the path for existence
but does not look any further.
Extra parallelism in task and job commit directory scanning
Use of createFile and openFile with parameters which allow
HEAD checks to be skipped.
The committer can write the summary _SUCCESS file to the path
`fs.s3a.committer.summary.report.directory`, which can be in a
different file system/bucket if desired, using the job id as
the filename.
Also: HADOOP-15460. S3A FS to add `fs.s3a.create.performance`
Application code can set the createFile() option
fs.s3a.create.performance to true to disable the same
safety checks when writing under magic directories.
Use with care.
The createFile option prefix `fs.s3a.create.header.`
can be used to add custom headers to S3 objects when
created.
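A hedged sketch of setting the two createFile() options named above through the builder API; the option handling is S3A-specific and the values shown are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateWithS3AOptions {
      public static void main(String[] args) throws Exception {
        Path path = new Path(args[0]);
        FileSystem fs = path.getFileSystem(new Configuration());

        try (FSDataOutputStream out = fs.createFile(path)
            .overwrite(true)
            // Skip the createFile() safety checks; use with care.
            .opt("fs.s3a.create.performance", true)
            // Custom header attached to the S3 object when it is created.
            .opt("fs.s3a.create.header.content-type", "text/csv")
            .build()) {
          out.writeBytes("col1,col2\n");
        }
      }
    }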
Contributed by Steve Loughran.