de37fd37d6
Declares its compatibility with Spark's dynamic output partitioning by having the stream capability "mapreduce.job.committer.dynamic.partitioning".

Requires a Spark release with SPARK-40034, which probes for the capability before deciding whether to accept or reject instantiation with dynamic partition overwrite set.

This capability can be declared by any other PathOutputCommitter implementation whose algorithm and destination filesystem are compatible. None of the S3A committers are compatible. The classic FileOutputCommitter is, but it does not declare itself as such, out of a reluctance to change that code; instead the Spark-side code automatically infers compatibility if the created committer is of that class or a subclass.

Contributed by Steve Loughran.
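As a rough illustration of the probe described above (a sketch only, not Spark's actual code: the class and method names here are invented, while the capability string, the StreamCapabilities interface, and the FileOutputCommitter special case come from the description), a caller holding a committer instance could check for support like this:

```java
import org.apache.hadoop.fs.StreamCapabilities;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

/**
 * Illustrative probe for dynamic partition overwrite support.
 * The class and method names are invented for this sketch; the real
 * probing logic lives on the Spark side (SPARK-40034).
 */
public final class DynamicPartitioningProbe {

  /** Stream capability string named in the commit message. */
  public static final String CAPABILITY_DYNAMIC_PARTITIONING =
      "mapreduce.job.committer.dynamic.partitioning";

  private DynamicPartitioningProbe() {
  }

  /**
   * A committer is accepted for dynamic partition overwrite if it either
   * declares the stream capability, or is the classic FileOutputCommitter
   * (or a subclass), for which compatibility is inferred automatically.
   */
  public static boolean supportsDynamicPartitioning(OutputCommitter committer) {
    if (committer instanceof FileOutputCommitter) {
      // Classic committer: compatible, but does not declare the capability.
      return true;
    }
    return committer instanceof StreamCapabilities
        && ((StreamCapabilities) committer)
            .hasCapability(CAPABILITY_DYNAMIC_PARTITIONING);
  }
}
```

A committer wishing to opt in would implement StreamCapabilities and return true from hasCapability() for this string, but only when its algorithm and destination filesystem genuinely support dynamic partition overwrite.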
Changed modules and files:
- hadoop-mapreduce-client-app
- hadoop-mapreduce-client-common
- hadoop-mapreduce-client-core
- hadoop-mapreduce-client-hs
- hadoop-mapreduce-client-hs-plugins
- hadoop-mapreduce-client-jobclient
- hadoop-mapreduce-client-nativetask
- hadoop-mapreduce-client-shuffle
- hadoop-mapreduce-client-uploader
- pom.xml