
Docker cluster definitions

This directory contains multiple docker cluster definitions to start a local pseudo-cluster with different configurations.

They make it easy to start a local (multi-node-like) pseudo-cluster with docker and docker-compose; they are obviously not intended for production use.

You can find more information in the specific subdirectories, but in general you can use the following commands:

Usage

To start a cluster go to a subdirectory and start the cluster:

docker-compose up -d
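
The cluster definitions in the subdirectories follow the standard docker-compose layout. A minimal, illustrative sketch of such a definition (the service names and commands here are assumptions; the real docker-compose.yaml files differ per subdirectory):

```yaml
# Hypothetical, simplified sketch of a compose cluster definition like those
# in the subdirectories; check the actual docker-compose.yaml of each variant.
version: "3"
services:
  datanode:
    image: apache/hadoop-runner
    command: ["/opt/hadoop/bin/ozone", "datanode"]
  om:
    image: apache/hadoop-runner
    command: ["/opt/hadoop/bin/ozone", "om"]
  scm:
    image: apache/hadoop-runner
    command: ["/opt/hadoop/bin/ozone", "scm"]
```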

You can check the logs of all the components with:

docker-compose logs

In case of a problem you can destroy the cluster and delete all the local state with:

docker-compose down

(Note: a simple docker-compose stop may not delete all the local data).

You can scale components up and down:

docker-compose scale datanode=5

(With recent docker-compose releases the standalone scale subcommand is deprecated; the equivalent is docker-compose up -d --scale datanode=5.)

The key web UI ports are usually published on the docker host.
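
Publishing is done with the standard compose ports mapping. A hedged sketch (9874 is the default OzoneManager HTTP port, but verify against the docker-compose.yaml of the specific subdirectory):

```yaml
services:
  om:
    ports:
      - 9874:9874   # OzoneManager web UI, reachable on the docker host
```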

Known issues

The base image used here is apache/hadoop-runner, which runs with JDK8 by default. You may run with JDK11 by specifying apache/hadoop-runner:jdk11 as the base image in simple (unsecure) mode. In secure mode, however, JDK11 is not fully supported yet due to JDK8 dependencies in the hadoop-common jars.