366186d999
This is needed to fix up some confusion about the caching of job.addCache() S3A paths: the files appear world-readable and all parent dirs world-executable, so the files are downloaded by the NM without using the DTs of the user submitting the job. This means that when you submit jobs to an EC2 cluster with lower IAM permissions than the user, cached resources don't get downloaded and the job doesn't start.

Production code changes:

* S3AFileStatus passes "true" for the superclass's encrypted flag during construction.

Tests:

* The base AbstractContractOpenTest can control whether the zero-byte files created in tests are encrypted. This is not done via an XML attribute, just a subclass override point. Thoughts?
* Verify that the filecache considers such paths to not have the permissions which trigger reduced-privilege downloads.
* Extend ITestDelegatedMRJob to test against a completely different bucket (OpenStreetMap), to verify that cached resources do get their tokens picked up.

Docs:

* Advise FS developers to say all files are encrypted. It's otherwise harmless, and it'll stop other people seeing impossible-to-debug error messages on app launch.

Contributed by Steve Loughran.

Change-Id: Ifaae4c9d735ccc5eafeebd2584b65daf2d4e5da3
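The production change is small: S3AFileStatus passes true for the encrypted flag of its FileStatus superclass, and the filecache never classes encrypted files as public resources, so they are localized with the submitting user's delegation tokens. Below is a minimal sketch of that interaction using the Hadoop 3 FileStatus constructor. The class name, paths, owner/group values, and the looksPublic() helper are illustrative only; the helper is a simplified stand-in for the real public-resource checks (which also walk parent dirs for world-execute permission), not the actual implementation.

```java
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class EncryptedFlagSketch {

  // Build a status the way the patch describes for S3A: the permissions
  // look wide open, but the isEncrypted argument (second-to-last) is true.
  static FileStatus encryptedStatus(Path path, long length) {
    return new FileStatus(length, false /* not a dir */, 1, 0, 0, 0,
        new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL),
        "alice", "staff", null /* no symlink */, path,
        false /* hasAcl */, true /* isEncrypted */, false /* isErasureCoded */);
  }

  // Simplified stand-in for the filecache's public-resource test:
  // world-readable AND not encrypted.
  static boolean looksPublic(FileStatus st) {
    return st.getPermission().getOtherAction().implies(FsAction.READ)
        && !st.isEncrypted();
  }

  public static void main(String[] args) {
    FileStatus st = encryptedStatus(new Path("s3a://bucket/libs/job.jar"), 1024);
    // Prints "public? false": despite 777-style permissions, the encrypted
    // bit keeps the resource private, so the NM localizes it with the
    // submitting user's delegation tokens.
    System.out.println("public? " + looksPublic(st));
  }
}
```

This is why the docs change advises all FS developers to declare files encrypted when in doubt: the flag only ever makes resource localization more conservative.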
hadoop-tools/ directory listing at this commit:

* hadoop-aliyun
* hadoop-archive-logs
* hadoop-archives
* hadoop-aws
* hadoop-azure
* hadoop-azure-datalake
* hadoop-datajoin
* hadoop-distcp
* hadoop-extras
* hadoop-fs2img
* hadoop-gridmix
* hadoop-kafka
* hadoop-openstack
* hadoop-pipes
* hadoop-resourceestimator
* hadoop-rumen
* hadoop-sls
* hadoop-streaming
* hadoop-tools-dist
* pom.xml