Commit Graph

1949 Commits

Author SHA1 Message Date
Yuval Degani
dd0834696a HADOOP-16581. Revise ValueQueue to correctly replenish queues that go below the watermark (#1463)
In the existing implementation, the ValueQueue::getAtMost() method only triggers a refill on a key queue once it has gone empty, instead of when it has dropped below the watermark. This change fixes the refill trigger and revises the test suite to verify the new behavior.
2019-09-20 09:55:48 -07:00
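Below is a minimal, illustrative Java sketch of the watermark-based refill described in the commit above; the class, field, and method names (capacity, lowWatermark, scheduleRefill) are hypothetical stand-ins, not the real org.apache.hadoop.crypto.key.kms.ValueQueue internals.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Sketch only: refill once the queue dips below the watermark, not only when it is empty. */
public class WatermarkRefillSketch {
  private final Queue<byte[]> keyQueue = new ArrayDeque<>();
  private final int capacity = 100;          // hypothetical per-key queue size
  private final float lowWatermark = 0.3f;   // hypothetical low-watermark fraction

  /** Take up to {@code num} cached values, topping the queue up when it falls below the watermark. */
  public synchronized int getAtMost(int num) {
    int taken = Math.min(num, keyQueue.size());
    for (int i = 0; i < taken; i++) {
      keyQueue.poll();
    }
    if (keyQueue.size() < capacity * lowWatermark) {
      scheduleRefill();   // before the fix described above, this ran only when the queue was empty
    }
    return taken;
  }

  private void scheduleRefill() {
    // the real ValueQueue refills asynchronously; a synchronous top-up keeps the sketch short
    while (keyQueue.size() < capacity) {
      keyQueue.add(new byte[16]);
    }
  }
}
```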
Vinayakumar B
1654497f98
HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1 (#1432)
HADOOP-16557. [pb-upgrade] Upgrade protobuf.version to 3.7.1. Contributed by Vinayakumar B.
2019-09-20 16:08:30 +05:30
Kihwal Lee
d4205dce17 HADOOP-16582. LocalFileSystem's mkdirs() does not work as expected under viewfs. Contributed by Kihwal Lee 2019-09-19 08:23:35 -05:00
Sahil Takiar
55ce454ce4
HADOOP-16371: Option to disable GCM for SSL connections when running on Java 8.
Contributed by Sahil Takiar.

This moves the SSLSocketFactoryEx class from hadoop-azure into hadoop-common
as the DelegatingSSLSocketFactory and binds the S3A connector to it, so that
it can avoid the HTTPS algorithms (such as GCM) that perform poorly on Java 8.

Change-Id: Ie9e6ac24deac1aa05e136e08899620efa7d22abd
2019-09-17 11:32:03 +01:00
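As an illustration of the approach described in the commit above, here is a hedged Java sketch that wraps a javax.net.ssl.SSLSocketFactory and strips GCM cipher suites; the class name and wrapping style are assumptions, not the actual DelegatingSSLSocketFactory code.

```java
import java.io.IOException;
import java.net.Socket;
import java.util.Arrays;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

/** Sketch only: delegate socket creation, then disable the GCM suites that perform poorly on Java 8. */
public class GcmFilteringSocketFactorySketch {
  private final SSLSocketFactory delegate;

  public GcmFilteringSocketFactorySketch(SSLSocketFactory delegate) {
    this.delegate = delegate;
  }

  public Socket createSocket(String host, int port) throws IOException {
    SSLSocket socket = (SSLSocket) delegate.createSocket(host, port);
    String[] withoutGcm = Arrays.stream(socket.getEnabledCipherSuites())
        .filter(suite -> !suite.contains("GCM"))   // keep only the faster non-GCM suites enabled
        .toArray(String[]::new);
    socket.setEnabledCipherSuites(withoutGcm);
    return socket;
  }
}
```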
Steve Loughran
9221704f85
HADOOP-16490. Avoid/handle cached 404s during S3A file creation.
Contributed by Steve Loughran.

This patch avoids issuing any HEAD path request when creating a file with overwrite=true,
so 404s will not end up in the S3 load balancers unless someone calls getFileStatus/exists/isFile
in their own code.

The Hadoop FsShell CommandWithDestination class is modified to not register uncreated files
for deleteOnExit(), because that calls exists() and so can place the 404 in the cache, even
after S3A is patched to not do it itself.

Because S3Guard knows when a file should be present, the patch adds a special FileNotFound retry policy,
independently configurable from other retry policies; it is also exponential, but with
different parameters. This is because every HEAD request will refresh any 404 cached in
the S3 load balancers. It's not enough to retry: we have to have a suitable gap between
attempts to (hopefully) ensure any cached entry will be gone.

The options and values are:

fs.s3a.s3guard.consistency.retry.interval: 2s
fs.s3a.s3guard.consistency.retry.limit: 7

The S3A copy() method used during rename() raises a RemoteFileChangedException which is not caught,
so it is not downgraded to a false return value. Thus, when a rename is unrecoverable, that fact is propagated to the caller.

Copy operations without S3Guard lack the confidence that the file exists, so they don't retry the same way:
they fail fast with a different error message. However, because create(path, overwrite=false) no
longer issues a HEAD on the path, we can at least be confident that S3A itself is not creating those cached
404 markers.

Change-Id: Ia7807faad8b9a8546836cb19f816cccf17cca26d
2019-09-11 16:46:25 +01:00
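The two options above can also be set programmatically; a hedged sketch using the standard Hadoop Configuration API, with the values quoted from the commit message:

```java
import org.apache.hadoop.conf.Configuration;

public class S3GuardRetryTuning {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Values shown are the defaults listed in the commit message above; widen them
    // if 404s cached by the S3 load balancers are observed to linger longer.
    conf.set("fs.s3a.s3guard.consistency.retry.interval", "2s");
    conf.setInt("fs.s3a.s3guard.consistency.retry.limit", 7);
  }
}
```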
Jungtaek Lim (HeartSaVioR)
bb0b922a71
HADOOP-16255. Add ChecksumFs.rename(path, path, boolean)
Contributed by Jungtaek Lim

Change-Id: If00a4d7d30456c08eb2b0f7e2b242197bc4ee05d
2019-09-06 21:53:00 +01:00
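A hedged usage sketch of the overwrite rename path this change wires up for checksummed filesystems; the paths are hypothetical, and the local FileContext is used only because LocalFs is backed by ChecksumFs.

```java
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class ChecksumRenameSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical paths; the three-argument rename with Options.Rename.OVERWRITE is
    // the call that reaches the new ChecksumFs overwrite handling.
    FileContext fc = FileContext.getLocalFSFileContext();
    fc.rename(new Path("/tmp/source.txt"), new Path("/tmp/dest.txt"),
        Options.Rename.OVERWRITE);
  }
}
```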
Erik Krogen
c92a3e94d8 HADOOP-15565. Add an inner FS cache to ViewFileSystem, separate from the global cache, to avoid file system leaks. Contributed by Jinglun. 2019-09-06 10:22:28 -07:00
Steve Loughran
511df1e837 HADOOP-16430. S3AFilesystem.delete to incrementally update s3guard with deletions
Contributed by Steve Loughran.

This overlaps the scanning for directory entries with batched calls to S3 DELETE and updates of the S3Guard tables.
It also uses S3Guard to list the files to delete, so it can find newly created files even when S3 listings are not consistent.

For paths for which the client considers S3Guard to be authoritative, we also do a recursive LIST of the store and delete those files; this is to find unindexed files and to guarantee that the delete(path, true) call really does delete everything underneath.

Change-Id: Ice2f6e940c506e0b3a78fa534a99721b1698708e
2019-09-05 14:25:15 +01:00
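The incremental pattern described above can be pictured with a hedged sketch; every helper name (bulkDeleteFromS3, removeFromMetastore) and the page size are hypothetical stand-ins rather than the real S3A internals.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch only: delete keys in pages and update the metastore per page, so a failure
 *  part-way through leaves S3Guard consistent with what has actually been deleted. */
public class IncrementalDeleteSketch {
  private static final int PAGE_SIZE = 250;   // hypothetical batch size

  public void deleteTree(Iterable<String> keysUnderPath) {
    List<String> batch = new ArrayList<>(PAGE_SIZE);
    for (String key : keysUnderPath) {
      batch.add(key);
      if (batch.size() == PAGE_SIZE) {
        flush(batch);
      }
    }
    if (!batch.isEmpty()) {
      flush(batch);
    }
  }

  private void flush(List<String> batch) {
    bulkDeleteFromS3(batch);       // hypothetical: one S3 multi-object DELETE call
    removeFromMetastore(batch);    // hypothetical: the S3Guard table updated for this page
    batch.clear();
  }

  private void bulkDeleteFromS3(List<String> keys) { /* placeholder */ }
  private void removeFromMetastore(List<String> keys) { /* placeholder */ }
}
```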
Erik Krogen
337e9b794d HADOOP-16268. Allow StandbyException to be thrown as CallQueueOverflowException when RPC call queue is filled. Contributed by CR Hota. 2019-09-04 08:22:02 -07:00
Surendra Singh Lilhore
29bd6f3fc3 HDFS-8631. WebHDFS : Support setQuota. Contributed by Chao Sun. 2019-08-28 23:58:23 +05:30
Erik Krogen
e356e4f4b7 HADOOP-16391 Add a prefix to the metric names for MutableRatesWithAggregation used for deferred RPC metrics to avoid collision with non-deferred metrics. Contributed by Bilwa S T. 2019-08-16 09:01:44 -07:00
Akira Ajisaka
0f8add8a60
HADOOP-16495. Fix invalid metric types in PrometheusMetricsSink (#1244) 2019-08-14 12:24:03 +09:00
Inigo Goiri
6b4564f1d5 HADOOP-16453. Update how exceptions are handled in NetUtils. Contributed by Lisheng Sun. 2019-08-11 20:34:36 -07:00
Eric Yang
22430c10e2 HADOOP-16457. Fixed Kerberos activation in ServiceAuthorizationManager.
Contributed by Prabhu Joseph
2019-08-06 17:04:17 -04:00
Jianfei Jiang
71aad60e51
HDFS-14691. Wrong usage hint for hadoop fs command "test".
Contributed by Jianfei Jiang.

Change-Id: I9f5e89721ff210641375fbf42a70043f0d74458e
2019-08-05 13:08:47 +01:00
Wei-Chiu Chuang
d086d058d8 HDFS-14652. HealthMonitor connection retry times should be configurable. Contributed by Chen Zhang. 2019-08-01 16:13:10 -07:00
Akira Ajisaka
8bda91d20a
HADOOP-16398. Exports Hadoop metrics to Prometheus (#1170) 2019-07-31 10:11:36 -07:00
Akira Ajisaka
cbfa3f3e98
HADOOP-16435. RpcMetrics should not be retained forever. Contributed by Zoltan Haindrich. 2019-07-29 17:37:26 -07:00
Steve Loughran
4317d33232
HADOOP-16380. S3Guard to determine empty directory status for all non-root directories.
Contributed by Steve Loughran and Gabor Bota.

This
* Asks S3Guard to determine the empty directory status.
* Makes S3A's root directory rm("/") call always return false (as abfs does)
* Documents that object stores MAY do this
* Overloads ContractTestUtils.assertDeleted to let assertions declare that the source directory does not need to exist. This stops inconsistencies in directory listings failing a root test.

It avoids a recent regression (HADOOP-16279) where, if there was a tombstone above the first element found in a directory listing, the directory would be considered empty even though it still had child entries. That could downgrade an rm(path, recursive) to a no-op, and also confuse rename(src, dest), as dest could be mistaken for an empty directory and the copy on top of it permitted, rather than rejected with "destination path exists and is not empty".

Change-Id: I136a3d1a5a48a67e6155d790a40ff558d0d2c108
2019-07-23 14:52:03 +01:00
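To make the regression concrete, here is an illustrative Java sketch of the corrected emptiness check; the Entry type and isTombstone() flag are hypothetical stand-ins for the S3Guard listing records.

```java
import java.util.List;

/** Sketch only: a directory is empty only if it has no live (non-tombstoned) child,
 *  not merely if the first listed entry happens to be a tombstone. */
public class EmptyDirectoryCheckSketch {
  static final class Entry {
    final String name;
    final boolean tombstone;
    Entry(String name, boolean tombstone) {
      this.name = name;
      this.tombstone = tombstone;
    }
    boolean isTombstone() {
      return tombstone;
    }
  }

  static boolean isEmptyDirectory(List<Entry> listing) {
    for (Entry e : listing) {
      if (!e.isTombstone()) {
        return false;   // a live child exists, so the directory is not empty
      }
    }
    return true;
  }
}
```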
Gopal V
b4466a3b0a
HADOOP-16341. ShutDownHookManager: Regressed performance on Hook removals after HADOOP-15679
Contributed by Gopal V and Atilla Magyar.

Change-Id: I066d5eece332a1673594de0f9b484443f95530ec
2019-07-17 13:50:02 +01:00
Steve Loughran
b15ef7dc3d
HADOOP-16384: S3A: Avoid inconsistencies between DDB and S3.
Contributed by Steve Loughran

Contains

- HADOOP-16397. Hadoop S3Guard Prune command to support a -tombstone option.
- HADOOP-16406. ITestDynamoDBMetadataStore.testProvisionTable times out intermittently

This patch doesn't fix the underlying problem but it

* changes some tests to clean up better
* does a lot more logging of operations against DDB, if enabled
* adds an entry point to dump the state of the metastore and s3 tables (precursor to fsck)
* adds a purge entry point to help clean up after a test run has got a store into a mess
* s3guard prune command adds -tombstone option to only clear tombstones

The outcome is that tests should pass consistently and if problems occur we have better diagnostics.

Change-Id: I3eca3f5529d7f6fec398c0ff0472919f08f054eb
2019-07-12 13:02:25 +01:00
Christopher Gregorian
129576f628 HDFS-14403. Cost-based extension to the RPC Fair Call Queue. Contributed by Christopher Gregorian. 2019-06-24 12:09:17 -07:00
Ayush Saxena
b52fd05d42 HDFS-13404. Addendum: RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fail. Contributed by Takanobu Asanuma. 2019-06-24 22:03:04 +05:30
Takanobu Asanuma
559cb11551 HDFS-13404. RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails. 2019-06-24 22:03:03 +05:30
Steve Loughran
e02eb24e0a
HADOOP-15183. S3Guard store becomes inconsistent after partial failure of rename.
Contributed by Steve Loughran.

Change-Id: I825b0bc36be960475d2d259b1cdab45ae1bb78eb
2019-06-20 09:56:40 +01:00
Wei-Chiu Chuang
1e92db5a1e HDFS-11949. Add a test case ensuring that FsShell can't move a file to a target directory where the file already exists. Contributed by legend. 2019-06-17 18:29:34 -07:00
Eric Yang
4ea6c2f457 HADOOP-16354. Enable AuthFilter as default for WebHDFS.
Contributed by Prabhu Joseph
2019-06-11 18:41:08 -04:00
Eric Yang
294695dd57 HADOOP-16314. Make sure all web end points are covered by the same authentication filter.
Contributed by Prabhu Joseph
2019-06-05 18:55:13 -04:00
Sammi Chen
d1aad44490 HDFS-14356. Implement HDFS cache on SCM with native PMDK libs. Contributed by Feilong He. 2019-06-05 21:33:00 +08:00
Steve Loughran
309501c6fa
Revert "HADOOP-16050: s3a SSL connections should use OpenSSL"
This reverts commit b067f8acaa.

Change-Id: I584b050a56c0e6f70b11fa3f7db00d5ac46e7dd8
2019-06-05 13:54:55 +01:00
Steve Loughran
7724d8031b Revert "HADOOP-16321: ITestS3ASSL+TestOpenSSLSocketFactory failing with java.lang.UnsatisfiedLinkErrors"
This reverts commit 5906268f0d.
2019-06-05 13:54:42 +01:00
Shweta Yakkali
6f5a36c13c HADOOP-13656. fs -expunge to take a filesystem. Contributed by Shweta.
Signed-off-by: Wei-Chiu Chuang <weichiu@apache.org>
2019-05-30 13:21:58 -07:00
Christopher Gregorian
f96a2df38d HADOOP-16266. Add more fine-grained processing time metrics to the RPC layer. Contributed by Christopher Gregorian. 2019-05-23 10:28:37 -07:00
Eric Yang
ea0b1d8fba HADOOP-16287. Implement ProxyUserAuthenticationFilter for web protocol impersonation.
Contributed by Prabhu Joseph
2019-05-23 11:36:32 -04:00
Akira Ajisaka
a771e2a638
HADOOP-12948. Remove the defunct startKdc profile from hadoop-common. Contributed by Wei-Chiu Chuang. 2019-05-23 13:59:42 +09:00
Sahil Takiar
5906268f0d HADOOP-16321: ITestS3ASSL+TestOpenSSLSocketFactory failing with java.lang.UnsatisfiedLinkErrors 2019-05-21 11:30:45 -06:00
Sahil Takiar
b067f8acaa HADOOP-16050: s3a SSL connections should use OpenSSL
(cherry picked from commit aebf229c175dfa19fff3b31e9e67596f6c6124fa)
2019-05-16 08:57:54 -06:00
Bharat Viswanadham
d4c8858586
HADOOP-16247. NPE in FsUrlConnection. Contributed by Karthik Palanisamy. 2019-05-15 17:41:36 -07:00
Akira Ajisaka
f257497b0f HADOOP-16299. [JDK 11] Build fails without specifying -Djavac.version=11
Signed-off-by: Takanobu Asanuma <tasanuma@apache.org>
2019-05-09 14:49:46 +09:00
Giovanni Matteo Fumarola
7a3188d054 HADOOP-16282. Avoid FileStream to improve performance. Contributed by Ayush Saxena. 2019-05-02 12:58:42 -07:00
Sahil Takiar
4877f0aa51 HDFS-3246: pRead equivalent for direct read path (#597)

Contributed by Sahil Takiar
2019-04-30 14:52:16 -07:00
Sean Mackrory
a703dae25e HADOOP-16222. Fix new deprecations after guava 27.0 update in trunk. Contributed by Gabor Bota. 2019-04-24 10:39:00 -06:00
Inigo Goiri
fb1c549139 HDFS-14374. Expose total number of delegation tokens in AbstractDelegationTokenSecretManager. Contributed by CR Hota. 2019-04-22 13:32:08 -07:00
Erik Krogen
1ddb48872f HADOOP-16265. Fix bug causing Configuration#getTimeDuration to use incorrect units when the default value is used. Contributed by starphin. 2019-04-22 08:16:57 -07:00
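For context, a hedged example of the API in question; the property name is hypothetical, and the comment paraphrases the bug described in the commit title.

```java
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TimeDurationExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // "example.timeout" is unset here, so the default value 30 is returned in the
    // requested unit; the bug fixed above caused the wrong units to be applied on
    // this default-value path.
    long timeoutSeconds = conf.getTimeDuration("example.timeout", 30, TimeUnit.SECONDS);
    System.out.println(timeoutSeconds);
  }
}
```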
Sahil Takiar
2382f63fc0
HADOOP-14747. S3AInputStream to implement CanUnbuffer.
Author:    Sahil Takiar <stakiar@cloudera.com>
2019-04-12 18:12:02 -07:00
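A hedged usage sketch of unbuffer() on an S3A stream after this change; the bucket and object name are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UnbufferSketch {
  public static void main(String[] args) throws Exception {
    Path path = new Path("s3a://example-bucket/data.bin");   // hypothetical location
    try (FileSystem fs = FileSystem.get(path.toUri(), new Configuration());
         FSDataInputStream in = fs.open(path)) {
      byte[] buf = new byte[4096];
      in.read(buf);
      // With S3AInputStream implementing CanUnbuffer, this releases held resources
      // between bursts of reads while keeping the stream open for later use.
      in.unbuffer();
    }
  }
}
```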
Inigo Goiri
7b5b783f66 HDFS-14327. Using FQDN instead of IP to access servers with DNS resolving. Contributed by Fengnan Li. 2019-04-03 16:11:13 -07:00
Steve Loughran
366186d999
HADOOP-16233. S3AFileStatus to declare that isEncrypted() is always true (#685)
This is needed to fix up some confusion about the filecache handling of job.addCache() S3A paths: when a file and all its parent dirs look world-readable, the files are downloaded by the NM without using the DTs of the user submitting the job. This means that when you submit jobs to an EC2 cluster with lower IAM permissions than the user, cached resources don't get downloaded and the job doesn't start.

Production code changes:
* S3AFileStatus passes "true" for the superclass's encrypted flag during construction.

Tests
* Base AbstractContractOpenTest can control whether zero byte files created in tests are encrypted. Not done via an XML attribute, just a subclass point. Thoughts?
* Verify that the filecache considers paths to not have the permissions which trigger reduce-privilege downloads
* And extend ITestDelegatedMRJob to test a completely different bucket (open street map), to verify that cached resources do get their tokens picked up

Docs:
* Advise FS developers to say all files are encrypted. It's otherwise harmless, and it'll stop other people seeing impossible-to-debug error messages on app launch.

Contributed by Steve Loughran.

Change-Id: Ifaae4c9d735ccc5eafeebd2584b65daf2d4e5da3
2019-04-03 21:23:40 +01:00
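A minimal Java sketch of the production change described above, assuming the long FileStatus constructor that takes the hasAcl/isEncrypted/isErasureCoded flags; the class name and argument values are illustrative, not the real S3AFileStatus.

```java
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

/** Sketch only: a FileStatus subclass that always reports isEncrypted() == true, so the
 *  filecache never treats the paths as public resources it may fetch without the
 *  submitter's credentials. Not the real S3AFileStatus. */
public class AlwaysEncryptedStatusSketch extends FileStatus {
  public AlwaysEncryptedStatusSketch(long length, long blockSize, long modTime, Path path) {
    super(length, false /*isdir*/, 1 /*replication*/, blockSize, modTime, 0 /*atime*/,
        FsPermission.getFileDefault(), "owner", "group", null /*symlink*/, path,
        false /*hasAcl*/, true /*isEncrypted*/, false /*isErasureCoded*/);
  }
}
```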
Akira Ajisaka
aaaf856f4b
HADOOP-16226. new Path(String str) does not remove all the trailing slashes of str 2019-04-03 13:16:59 +09:00
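A small hedged example of the behavior fixed above; the path string is arbitrary.

```java
import org.apache.hadoop.fs.Path;

public class PathNormalizationExample {
  public static void main(String[] args) {
    // With the fix, normalization strips every trailing slash, not just one.
    System.out.println(new Path("/user/hadoop/dir///"));   // expected output: /user/hadoop/dir
  }
}
```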
Lokesh Jain
cf268114c9 HDFS-13960. hdfs dfs -checksum command should optionally show block size in output. Contributed by Lokesh Jain.
Signed-off-by: Wei-Chiu Chuang <weichiu@apache.org>
2019-04-02 12:24:55 -07:00
Xiaoyu Yao
f41f938b2e
HADOOP-16199. LoadBalancingKMSClientProvider does not select the token correctly. Contributed by Xiaoyu Yao.
This closes  #642.
2019-03-28 21:55:31 -07:00