# Docker cluster definitions

This directory contains multiple docker cluster definitions to start a local pseudo-cluster with different configurations.

They help to start a local (multi-node-like) pseudo-cluster with docker and docker-compose; obviously, they are not meant for production.

You may find more information in the specific subdirectories, but in general you can use the following commands:

## Usage

To start a cluster, go to a subdirectory and bring it up:

```
docker-compose up -d
```
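
For example, a minimal sketch assuming you use the `ozone` subdirectory (any of the other subdirectories works the same way):

```
cd ozone
docker-compose up -d
```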

You can check the logs of all the components with:

```
docker-compose logs
```
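
You can also follow the log of a single component; a sketch with the `datanode` service (the same service name used in the scaling example below):

```
# follow the log output of the datanode containers only
docker-compose logs -f datanode
```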

In case of a problem, you can destroy the cluster and delete all the local state with:

```
docker-compose down
```

(Note: a simple `docker-compose stop` may not delete all the local data.)
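
A sketch of the difference: `stop` only halts the containers (they and their data are kept), while `down` also removes the containers and networks together with the cluster's local state:

```
# stop the containers, they (and the data inside them) are kept
docker-compose stop
# remove the containers and networks, destroying the local cluster state
docker-compose down
```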

You can scale the components up and down:

```
docker-compose scale datanode=5
```
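
You can verify the result of the scaling; `docker-compose ps` lists one line per running container:

```
# after scaling, five datanode containers should be listed
docker-compose ps
```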

Usually the key web UI ports are published on the docker host.
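
The exact mappings are defined in the compose file of each subdirectory, so they differ per cluster; a sketch of how to check them (assuming the file is named docker-compose.yaml, as in these subdirectories):

```
# show which container ports are published to the docker host
grep -A 2 'ports:' docker-compose.yaml
```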