# Ozone Distribution

This folder contains the project that creates the binary Ozone distribution and provides all the helper scripts and Docker files needed to start it locally or in a cluster.

## Testing with a local Docker-based cluster

After a full dist build you can find multiple docker-compose based cluster definitions in the `target/ozone-*/compose` folder.

Please check the README files there.

Usually you can start the cluster with:

```shell
cd compose/ozone
docker-compose up -d
```
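Once the containers are up, the cluster can be inspected and scaled with the usual docker-compose commands. The `datanode` service name below is an assumption based on typical Ozone compose files; check the compose file you are using for the real service names:

```shell
# List the containers of the cluster and their state
docker-compose ps

# Scale the number of datanodes ('datanode' is an assumed service name)
docker-compose up -d --scale datanode=3

# Stop and remove the containers when finished
docker-compose down
```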

## Testing on Kubernetes

You can also test the Ozone cluster on Kubernetes. If you have no active Kubernetes cluster, you can start a local one with minikube:

```shell
minikube start
```

For testing on Kubernetes you need to:

  1. Create a Docker image from the new build
  2. Upload it to a Docker registry
  3. Deploy the cluster by applying the Kubernetes resources
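Without skaffold, the three steps above can be sketched roughly as follows. The image name, registry host, and tag are placeholders for illustration, not values mandated by the project:

```shell
# 1. Create a Docker image from the freshly built distribution
#    (run from the hadoop-ozone/dist folder after a full build)
docker build -t myregistry.example.com/ozone:dev .

# 2. Upload the image to a Docker registry (placeholder registry host)
docker push myregistry.example.com/ozone:dev

# 3. Deploy the cluster by applying the Kubernetes resources
kubectl apply -f src/main/k8s/
```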

The easiest way to do all of these steps is to use the skaffold tool. After installing skaffold, you can execute

```shell
skaffold run
```

in this (hadoop-ozone/dist) folder.

The default Kubernetes resource set (`src/main/k8s/`) contains NodePort-based service definitions for the Ozone Manager, the Storage Container Manager and the S3 gateway.
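A NodePort service of this kind looks roughly like the sketch below. The selector label and the port number are assumptions for illustration; check the definitions under `src/main/k8s/` for the real values:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: s3g-public
spec:
  type: NodePort            # exposes the port on every node of the cluster
  selector:
    app: s3g                # assumed label; see src/main/k8s/ for the real one
  ports:
    - port: 9878            # assumed S3 gateway HTTP port
      targetPort: 9878
```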

With minikube you can access the services with:

```shell
minikube service s3g-public
minikube service om-public
minikube service scm-public
```
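On a Kubernetes cluster other than minikube, one alternative sketch is `kubectl port-forward`; the port number below is an assumption, so use the port defined in the actual service:

```shell
# Forward the S3 gateway service to localhost
# (9878 is an assumed port, not taken from the project's service definitions)
kubectl port-forward service/s3g-public 9878:9878
```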

## Monitoring

Apache Hadoop Ozone supports Prometheus out of the box: it contains a Prometheus-compatible exporter servlet. To start the monitoring you need a Prometheus deployment in your Kubernetes cluster:

```shell
cd src/main/k8s/prometheus
kubectl apply -f .
```
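Because of the exporter servlet, Prometheus can scrape the Ozone components directly. A minimal scrape configuration could look like the sketch below; the endpoint path and the target addresses are assumptions, so verify them against the resources actually deployed:

```yaml
scrape_configs:
  - job_name: ozone
    metrics_path: /prom        # assumed path of the Prometheus exporter servlet
    static_configs:
      - targets:
          - scm-0.scm:9876     # assumed in-cluster SCM HTTP address
          - om-0.om:9874       # assumed in-cluster Ozone Manager HTTP address
```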

The Prometheus UI can also be accessed via a NodePort service:

```shell
minikube service prometheus-public
```

## Notes on the Kubernetes setup

Please note that the provided Kubernetes resources are not suitable for production:

  1. There is no security setup
  2. The datanodes are started in a StatefulSet instead of a DaemonSet (to make it possible to scale them up on a one-node minikube cluster)
  3. All the UI pages are published with NodePort services