How to deploy DataStax Enterprise on OpenEBS storage and use Litmus for chaos engineering

Giridharaprasad
Apr 16, 2019


DataStax Enterprise is based on Apache Cassandra, which was designed to solve the problems of scale and replication by embracing a masterless architecture that scales linearly on commodity servers. Applications can read from or write to any node in the cluster, and there is no single point of failure.

OpenEBS is an open-source project for container-attached and container-native storage on Kubernetes. It implements granular storage policies and isolation that enable users to optimize storage for each specific workload.

Litmus is a framework for e2e testing and chaos engineering on Kubernetes, focusing on stateful workloads. The primary objective of Litmus is to ensure consistent and reliable behavior of Kubernetes objects for various persistent workloads and to catch hard-to-find or unacceptable issues. It provides “Litmus experiments” or “chaos experiments”, which are essentially Kubernetes jobs running test containers. Litmus injects chaos into a specified Kubernetes object and observes the impact on its behavior.

In this blog, we will see how Litmus can be used to deploy DSE on an OpenShift cluster consuming OpenEBS persistent volumes.

Prerequisites

  • The Litmus framework has to be configured in the target cluster. It can be set up by following the steps provided in this document.
  • Create a namespace for running DataStax components.
oc create ns app-datastax
  • Run the following command to grant the required security context to pods in the application namespace created above.
oc adm policy add-scc-to-user anyuid system:serviceaccount:app-datastax:default
  • Ensure that an OpenEBS cStor storage pool has been created. If not, use the claim template below to create the pool.
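A minimal pool claim along the following lines can be used. This is an illustrative sketch of an OpenEBS v1alpha1 StoragePoolClaim; the pool name, pool type, and disk names are placeholders that must be adjusted to the disks available in your cluster.

apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool        # placeholder pool name
spec:
  name: cstor-disk-pool
  type: disk
  maxPools: 3
  poolSpec:
    poolType: striped          # striped or mirrored
  disks:
    diskList:
      # Replace with the disk custom resources discovered in your cluster
      - disk-<replace-with-disk-cr-name>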

Populate the above template with your requirements and apply the YAML to create the storage pool.

  • The next step is to create an OpenEBS storage class that uses the above storage pool. The manifest below can be used to create the storage class.
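A representative storage class manifest is sketched below; the storage class name, pool claim name, and replica count are placeholders.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-disk            # placeholder storage class name
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"      # pool claim created above
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi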

Populate the above template with the storage class name and storage pool name, then apply the manifest to create the desired storage class.

Deploying DSE using Litmus and OpenEBS

The Litmus experiment for deploying DataStax Enterprise is as follows:
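The experiment is, in essence, a Kubernetes Job that runs the Litmus test container against the target namespace. The sketch below is illustrative only; the image, playbook path, and environment variable names are assumptions, and the actual values should be taken from the experiment manifest in the Litmus repository.

apiVersion: batch/v1
kind: Job
metadata:
  generateName: litmus-datastax-
  namespace: litmus
spec:
  template:
    metadata:
      labels:
        app: datastax-litmus
    spec:
      serviceAccountName: litmus
      restartPolicy: Never
      containers:
        - name: ansibletest
          image: openebs/ansible-runner:ci      # assumed test-runner image
          env:
            # Namespace in which DSE and OpsCenter will be deployed
            - name: APP_NAMESPACE
              value: app-datastax
            # OpenEBS storage class created in the previous step
            - name: PROVIDER_STORAGE_CLASS
              value: openebs-cstor-disk
          command: ["/bin/bash", "-c"]
          args: ["ansible-playbook ./datastax/test.yml -i /etc/ansible/hosts -v; exit 0"]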

Users can modify the environment variables in the above Litmus experiment based on their requirements.

To install DSE, create the Litmus job using the above manifest:

oc create -f run_litmus_test.yml
job.batch "litmus-datastax-hxfwn" created

It may take some time for the configuration to complete. This Litmus experiment installs both the DSE and OpsCenter components.

Once deployed successfully, you can list the pods using the command below.

> oc get pods -n <namespace>
NAME          READY   STATUS    RESTARTS   AGE
dse-0         1/1     Running   0          2h
dse-1         1/1     Running   0          2h
dse-2         1/1     Running   0          2h
opscenter-0   1/1     Running   0          2h

The services created by the above Litmus experiment can be viewed with the following command.

> oc get svc
NAME               TYPE           CLUSTER-IP       EXTERNAL-IP                 PORT(S)                       AGE
dse                ClusterIP      None             <none>                      9042/TCP                      2h
opscenter          ClusterIP      None             <none>                      8888/TCP,8443/TCP,61620/TCP   2h
opscenter-ext-lb   LoadBalancer   172.24.216.201   172.29.250.2,172.29.250.2   8443:31793/TCP                2h

Now you are set to use DataStax Enterprise.

You can create an OpenShift route to access OpsCenter externally.

oc expose service/<serviceName> --hostname=<domainName> -n <nameSpace>
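For example, with the namespace and OpsCenter service created above and a hypothetical domain name:

oc expose service/opscenter --hostname=opscenter.example.com -n app-datastax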

Generate the certificate using the following command:

sudo certbot --nginx -d <domainName> -d <domainName>

This will create the certificates in the following directory:

/etc/letsencrypt/live/<domainName>

Add the certificates to the application's route through the OpenShift web console.
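Alternatively, if you prefer the CLI to the web console, an edge-terminated route carrying the Let's Encrypt certificate can be created in place of the plain route, along these lines (the route name is illustrative):

oc create route edge opscenter-secure \
  --service=opscenter \
  --cert=/etc/letsencrypt/live/<domainName>/fullchain.pem \
  --key=/etc/letsencrypt/live/<domainName>/privkey.pem \
  --hostname=<domainName> \
  -n app-datastax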

Finally, you can access the OpsCenter console through the domain name.

Written by Giridharaprasad, Software Engineer at Mayadata Inc.
