* Remove redundant strings.h inclusions
* strings.h was included in a number of
C/C++ files where it was redundant.
* Also, strings.h is not available on
Windows and thus isn't cross-platform
compatible.
* Build for all platforms in CI
* Revert "Build for all platforms in CI"
This reverts commit 2650f047bd6791a5908cfbe50cc8e70d42c512cb.
* Debug failure on Centos 8
* Skipping the pipeline run on
CentOS 7 to debug the
failure on CentOS 8.
* Revert "Debug failure on Centos 8"
This reverts commit e365e34d6fab9df88f4df622910ddb28a8c8796f.
* hdfs_find uses the u_int32_t type for
storing the value of the max-depth
command line argument.
* The type u_int32_t isn't standard and
isn't available on Windows, which
breaks cross-platform compatibility.
We need to replace it with uint32_t,
which is available on all platforms
since it's part of the C++ standard
(<cstdint>).
This moves off use of the purged s3a://landsat-pds bucket, fixing tests
which had started failing.
* Adds a new class, PublicDatasetTestUtils, to manage the use of public datasets.
* The new test bucket s3a://usgs-landsat/ is requester pays, so depends upon
HADOOP-14661.
Consult the updated test documentation when running against other S3 stores.
Contributed by Daniel Carl Jones
Change-Id: Ie8585e4d9b67667f8cb80b2970225d79a4f8d257
This allows builds to be run with options like
--mvnargs="-Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false"
Contributed by Ayush Saxena.
Change-Id: I396e82d0915d679657d063a948f865041bcdde29
* Some C/C++ files use the ssize_t data type.
This isn't available on Windows, so we
need to define an alias for it and set it
to an appropriate signed type to make the
code cross-platform compatible.
Stops the abfs connector warning if openFile().withFileStatus()
is invoked with a FileStatus that is not an abfs VersionedFileStatus.
Contributed by Steve Loughran.
Change-Id: I85076b365eb30aaef2ed35139fa8714efd4d048e
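As a minimal sketch of the call involved (the account, container and
path below are hypothetical), a client can hand an existing FileStatus
back to openFile() so the connector can skip its own status probe:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WithFileStatusSketch {
      public static void main(String[] args) throws Exception {
        // Hypothetical abfs path.
        Path path = new Path(
            "abfs://container@account.dfs.core.windows.net/data/part-0000.csv");
        FileSystem fs = path.getFileSystem(new Configuration());
        FileStatus status = fs.getFileStatus(path); // often from a listing
        try (FSDataInputStream in = fs.openFile(path)
            // Reuses the supplied status; with this change abfs no longer
            // warns when it is a plain FileStatus rather than its own
            // VersionedFileStatus.
            .withFileStatus(status)
            .build()
            .get()) {
          System.out.println("first byte: " + in.read());
        }
      }
    }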
S3A input stream support for the few fs.option.openfile settings.
As well as supporting the read policy option and values,
if the file length is declared in fs.option.openfile.length
then no HEAD request will be issued when opening a file.
This can cut a few tens of milliseconds off the operation.
The patch adds a new openfile parameter/FS configuration option
fs.s3a.input.async.drain.threshold (default: 16000).
It declares the number of bytes remaining in the HTTP input stream
above which any operation to read and discard the rest of the stream,
"draining", is executed asynchronously.
This asynchronous draining offers some performance benefit on seek-heavy
file IO.
Contributed by Steve Loughran.
Change-Id: I9b0626bbe635e9fd97ac0f463f5e7167e0111e39
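A minimal sketch of how these options can be combined when opening an
S3A file; the bucket, key and sizes are hypothetical, and the option
keys are those described above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3AOpenFileSketch {
      public static void main(String[] args) throws Exception {
        // Hypothetical bucket and object.
        Path path = new Path("s3a://example-bucket/data/records.orc");
        FileSystem fs = path.getFileSystem(new Configuration());
        try (FSDataInputStream in = fs.openFile(path)
            // Declaring the length means no HEAD request on open.
            .opt("fs.option.openfile.length", "4194304")
            .opt("fs.option.openfile.read.policy", "random")
            // Drain asynchronously when more than 64K remains unread.
            .opt("fs.s3a.input.async.drain.threshold", "65536")
            .build()
            .get()) {
          byte[] footer = new byte[1024];
          // Seek-heavy, columnar-style positioned read.
          in.readFully(4194304 - 1024, footer);
        }
      }
    }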
These changes ensure that sequential files are opened with the
right read policy, and split start/end is passed in.
As well as offering opportunities for filesystem clients to
choose fetch/cache/seek policies, the settings ensure that
processing text files on an s3 bucket where the default policy
is "random" will still be processed efficiently.
This commit depends on the associated hadoop-common patch,
which must be committed first.
Contributed by Steve Loughran.
Change-Id: Ic6713fd752441cf42ebe8739d05c2293a5db9f94
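To illustrate, a sketch of how a record reader might pass its split
boundaries and read policy when opening a file; the helper method and
its arguments are hypothetical:

    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class SplitOpenSketch {
      // Hypothetical helper: open one split of a text file for
      // sequential processing.
      static FSDataInputStream openSplit(FileSystem fs, Path path,
          long start, long end) throws Exception {
        FSDataInputStream in = fs.openFile(path)
            .opt("fs.option.openfile.read.policy", "sequential")
            .opt("fs.option.openfile.split.start", Long.toString(start))
            .opt("fs.option.openfile.split.end", Long.toString(end))
            .build()
            .get();
        in.seek(start); // position at the start of this task's split
        return in;
      }
    }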
This defines standard options and values for the
openFile() builder API for opening a file:
fs.option.openfile.read.policy
  A list of the desired read policies, in preferred order.
  Standard values are:
  adaptive, default, random, sequential, vector, whole-file
fs.option.openfile.length
  The length of the file.
fs.option.openfile.split.start
  Start of a task's split.
fs.option.openfile.split.end
  End of a task's split.
These can be used by filesystem connectors to optimize their
reading of the source file, including but not limited to
* skipping existence/length probes when opening a file
* choosing a policy for prefetching/caching data
The hadoop shell commands which read files all declare "whole-file"
and "sequential", as appropriate.
Contributed by Steve Loughran.
Change-Id: Ia290f79ea7973ce8713d4f90f1315b24d7a23da1
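For example, a sketch of a shell-style whole-file read using these
options (the path is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class WholeFileReadSketch {
      public static void main(String[] args) throws Exception {
        // Hypothetical bucket and object.
        Path path = new Path("s3a://example-bucket/logs/app.log");
        FileSystem fs = path.getFileSystem(new Configuration());
        try (FSDataInputStream in = fs.openFile(path)
            // Read policies in preferred order, as the shell
            // commands declare them.
            .opt("fs.option.openfile.read.policy", "whole-file, sequential")
            .build()
            .get()) {
          IOUtils.copyBytes(in, System.out, 4096, false);
        }
      }
    }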