---
title: Running concurrently with HDFS
linktitle: Running with HDFS
weight: 1
summary: Ozone is designed to run concurrently with HDFS. This page explains how to deploy Ozone in an existing HDFS cluster.
---
Ozone is designed to work with HDFS, so it is easy to deploy Ozone in an existing HDFS cluster.
The container manager part of Ozone can run inside DataNodes as a pluggable module or as a standalone component. This document describes how to start it as an HDFS DataNode plugin.
To activate Ozone, you should define the service plugin implementation class:
{{< highlight xml >}}
<property>
  <name>dfs.datanode.plugins</name>
  <value>org.apache.hadoop.ozone.HddsDatanodeService</value>
</property>
{{< /highlight >}}
You also need to add the ozone-datanode-plugin jar file to the classpath:
{{< highlight bash >}}
export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
{{< /highlight >}}
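The export above only affects the current shell. One way to make it persistent is to append it to `hadoop-env.sh`; this is a sketch, not part of the official steps, and assumes a standard Hadoop layout under `HADOOP_HOME`:

{{< highlight bash >}}
# Append to the Hadoop env script so every DataNode start picks up the plugin jar.
# The path to hadoop-env.sh assumes the default Hadoop config layout; adjust as needed.
echo 'export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar' \
  >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
{{< /highlight >}}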
To start Ozone with HDFS, you should start the following components (a sample start-up sequence follows the list):
- HDFS Namenode (from the Hadoop distribution)
- HDFS Datanode (from the Hadoop distribution, with the Ozone plugin jar on its classpath)
- Ozone Manager (from the Ozone distribution)
- Storage Container Manager (from the Ozone distribution)
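A minimal start-up sketch, assuming `HADOOP_HOME` and `OZONE_HOME` point to the respective distributions and that both HDFS and Ozone have already been initialized; exact scripts may vary between releases:

{{< highlight bash >}}
# HDFS side (from the Hadoop distribution)
$HADOOP_HOME/bin/hdfs --daemon start namenode
$HADOOP_HOME/bin/hdfs --daemon start datanode   # loads the Ozone plugin via dfs.datanode.plugins

# Ozone side (from the Ozone distribution); start SCM before OM
$OZONE_HOME/bin/ozone --daemon start scm
$OZONE_HOME/bin/ozone --daemon start om
{{< /highlight >}}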
Check the DataNode log to verify that the HDDS/Ozone plugin has started. The log should contain a line like this:
{{< highlight text >}}
2018-09-17 16:19:24 INFO HddsDatanodeService:158 - Started plug-in org.apache.hadoop.ozone.web.OzoneHddsDatanodeService@6f94fb9d
{{< /highlight >}}
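A quick way to check for this line, assuming the default log location under `$HADOOP_HOME/logs` (adjust the path for your installation):

{{< highlight bash >}}
# Search the DataNode log for the plugin start-up message
grep 'Started plug-in' $HADOOP_HOME/logs/*datanode*.log
{{< /highlight >}}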