Installing Cloudflow
This guide shows how to install Cloudflow using the Helm and kubectl command-line tools.
It also shows how to install the Kafka, Spark, and Flink operators that integrate with Cloudflow.
Installing Cloudflow
In this guide, we will use Helm to install Cloudflow.
The first step is to create the namespace to install Cloudflow into, if it does not already exist:
kubectl create ns cloudflow
Many of the subsequent commands assume that the namespace is cloudflow.
First, we add the Cloudflow Helm repository and update the local index:
helm repo add cloudflow-helm-charts https://lightbend.github.io/cloudflow-helm-charts/ helm repo update
Installing (with support for Spark or Flink)
For use with Spark or Flink, the persistentStorageClass parameter sets the storage class that Cloudflow uses for persistent volumes. As mentioned in Installation Prerequisites, this storage class must support the ReadWriteMany access mode. In this example, we use the nfs storage class.
cloudflow_operator.persistentStorageClass=nfs
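Before installing, you may want to verify that the storage class actually exists in the cluster. A minimal check, assuming the storage class is named nfs as in the example (the command degrades gracefully when kubectl or a cluster is unavailable):

```shell
# Assumed storage class name from the example; adjust to your cluster.
STORAGE_CLASS=nfs
if command -v kubectl >/dev/null 2>&1; then
  # Succeeds if the storage class is defined; otherwise prints a reminder.
  kubectl get storageclass "$STORAGE_CLASS" || \
    echo "storage class '$STORAGE_CLASS' not found; it must support ReadWriteMany"
fi
```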
The kafkaBootstrapservers parameter sets the address and port of the Kafka cluster that Cloudflow will use. In this example, we use the address of a Strimzi-created Kafka cluster located in the cloudflow namespace.
cloudflow_operator.kafkaBootstrapservers=cloudflow-strimzi-kafka-bootstrap.cloudflow:9092
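Strimzi names the bootstrap service it creates <cluster-name>-kafka-bootstrap, so the value above follows directly from the Kafka cluster name and its namespace. A small sketch of how the address is assembled, assuming the cluster name cloudflow-strimzi from the example:

```shell
# Assumed names taken from the example; adjust to your environment.
KAFKA_CLUSTER=cloudflow-strimzi
KAFKA_NAMESPACE=cloudflow

# Strimzi exposes a bootstrap service named <cluster>-kafka-bootstrap;
# 9092 is the plain-text listener port.
KAFKA_BOOTSTRAP="${KAFKA_CLUSTER}-kafka-bootstrap.${KAFKA_NAMESPACE}:9092"
echo "$KAFKA_BOOTSTRAP"   # cloudflow-strimzi-kafka-bootstrap.cloudflow:9092
```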
The following command installs Cloudflow using the Helm chart:
helm install cloudflow cloudflow-helm-charts/cloudflow \
  --namespace cloudflow \
  --set cloudflow_operator.persistentStorageClass=nfs \
  --set cloudflow_operator.kafkaBootstrapservers=cloudflow-strimzi-kafka-bootstrap.cloudflow:9092
Installing (no support for Spark or Flink)
The kafkaBootstrapservers parameter sets the address and port of the Kafka cluster that Cloudflow will use. In this example, we use the address of a Strimzi-created Kafka cluster located in the cloudflow namespace.
cloudflow_operator.kafkaBootstrapservers=cloudflow-strimzi-kafka-bootstrap.cloudflow:9092
The following command installs Cloudflow using the Helm chart:
helm install cloudflow cloudflow-helm-charts/cloudflow \
  --namespace cloudflow \
  --set cloudflow_operator.kafkaBootstrapservers=cloudflow-strimzi-kafka-bootstrap.cloudflow:9092
Verifying the installation
Check the status of the installation using kubectl. When the Cloudflow operator pod reaches Running status, the installation is complete.
$ kubectl get pods -n cloudflow
NAME                                  READY   STATUS    RESTARTS   AGE
cloudflow-operator-6b7d7cbdfc-xb6w5   1/1     Running   0          10s
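Instead of polling kubectl get pods, you can block until the operator deployment reports ready. The deployment name cloudflow-operator below is an assumption inferred from the pod name in the output above; it may differ between chart versions, so confirm it with kubectl get deploy -n cloudflow first.

```shell
# Assumed deployment name; confirm with: kubectl get deploy -n cloudflow
DEPLOYMENT=cloudflow-operator
if command -v kubectl >/dev/null 2>&1; then
  # Waits up to 2 minutes for the rollout to complete.
  kubectl -n cloudflow rollout status "deployment/$DEPLOYMENT" --timeout=120s || \
    echo "deployment '$DEPLOYMENT' is not ready yet"
fi
```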
You can now deploy an Akka-based Cloudflow application to the cluster, since Akka applications only require Kafka to be set up. More on this in the development section of the documentation.
Adding Spark support
If you plan to write applications that utilize Spark, you will need to install the Spark operator before deploying your Cloudflow application.
Continue with Adding Spark Support.
Adding Flink support
If you plan to write applications that utilize Flink, you will need to install the Flink operator before deploying your Cloudflow application.
Continue with Adding Flink Support.
Next: Installing Enterprise components
Continue with Installing Cloudflow Enterprise components