-----------------------------------------------------------------------------
HttpFS - Hadoop HDFS over HTTP

HttpFS is a server that provides a REST HTTP gateway to HDFS with full
filesystem read & write capabilities.
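For example, files can be read and directories created with plain HTTP calls
(hostnames, paths and user names below are illustrative; 14000 is the default
HttpFS port):

  # Read a file
  curl "http://httpfs-host:14000/webhdfs/v1/user/foo/README.txt?op=OPEN&user.name=foo"

  # Create a directory
  curl -X PUT "http://httpfs-host:14000/webhdfs/v1/user/foo/newdir?op=MKDIRS&user.name=foo"

HttpFS exposes the same REST API as WebHDFS, so existing WebHDFS clients and
documentation apply to HttpFS as well.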

HttpFS can be used to transfer data between clusters running different
versions of Hadoop (overcoming RPC versioning issues), for example using
Hadoop DistCP.
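As a sketch, a copy from a remote cluster reachable through an HttpFS gateway
could look like this (hostnames and paths are illustrative):

  hadoop distcp webhdfs://httpfs-host:14000/user/foo/data hdfs:///user/foo/data

The webhdfs:// scheme speaks the same REST protocol that HttpFS serves, so the
copy does not depend on the RPC version of the remote cluster.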

HttpFS can be used to access data in HDFS on a cluster behind a firewall
(the HttpFS server acts as a gateway and is the only system that is allowed
to cross the firewall into the cluster).

HttpFS can be used to access data in HDFS using HTTP utilities (such as curl
and wget) and HTTP libraries from languages other than Java (such as Perl).
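For example, a plain HTTP GET returns a directory listing as JSON that any
language with an HTTP library can consume (hostname, path and user name are
illustrative):

  curl "http://httpfs-host:14000/webhdfs/v1/user/foo?op=LISTSTATUS&user.name=foo"

The response is a JSON FileStatuses document, so no Hadoop client libraries
are required on the caller's side.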
-----------------------------------------------------------------------------