Deploy and Use OpenEBS Container Storage on Kubernetes


Managing storage and volumes on a Kubernetes cluster can be challenging for many engineers. Setting up Persistent Volumes (PVs) and provisioning them dynamically is made much easier by the tool we are going to explore today: OpenEBS.
OpenEBS is a cloud native storage project, originally created by MayaData, that builds on a Kubernetes cluster and allows stateful applications to access dynamic Local PVs and/or replicated PVs. OpenEBS runs on any Kubernetes platform, including managed offerings such as GKE, AKS and Amazon EKS, and can back volumes up to cloud object storage such as AWS S3.
OpenEBS adopts the Container Attached Storage (CAS) architecture, meaning the volumes provisioned through OpenEBS are themselves containerized.
OpenEBS utilizes disks attached to the worker nodes, external mount points and local host paths on the nodes.
Features of OpenEBS
- Containerized storage: OpenEBS volumes are always containerized, as OpenEBS uses the Container Attached Storage architecture.
- Synchronous replication: OpenEBS can synchronously replicate data volumes when used with cStor, Jiva or Mayastor for high availability of stateful applications.
- Snapshots: Snapshots are created instantaneously when using cStor. This makes it easy to migrate data within the Kubernetes cluster.
- Backup and restore: The backup and restore feature works with Kubernetes solutions such as Velero. You can back up data to object storage such as AWS S3 and Google Cloud Storage.
- Prometheus metrics: OpenEBS volumes are configured to generate granular data metrics such as throughput, latency and IOPS. These can easily be shipped via the Prometheus exporter and displayed on a Grafana dashboard to monitor cluster health, disk failures and utilization.

OpenEBS Architecture
OpenEBS uses the CAS model. This means that each volume has a dedicated controller pod and a set of replica pods.
OpenEBS has the following components:
- Data plane components – cStor, Jiva and LocalPV
The data plane is responsible for the actual IO path of the persistent volume. You can choose between the three storage engines discussed below depending on your workloads and preferences.
- cStor – This is the preferred storage engine for OpenEBS as it offers enterprise-grade features such as snapshots, clones, thin provisioning, data consistency and scalability in capacity. This in turn allows Kubernetes stateful deployments to work with high availability. cStor is designed to have three replicas, whereby data is written synchronously to the replicas, allowing pods to retain data when they are terminated and rescheduled.
- Jiva – Jiva runs exclusively in user space with block storage capabilities such as synchronous replication. This option is ideal in situations where you have applications running on nodes that might not be able to add more block storage devices. It is, however, not ideal for mission-critical applications that require high-performance storage capabilities.
- LocalPV – This is the simplest storage engine of the three. A Local Persistent Volume is a volume directly attached to a single Kubernetes node. OpenEBS can use a locally attached disk or a path (mount point) to provision persistent volumes to the k8s cluster. This is ideal for applications that do not require advanced storage capabilities such as snapshots and clones.
The table below summarizes the features of each storage engine, as discussed above.

Feature                   cStor   Jiva   LocalPV
Synchronous replication   Yes     Yes    No
Snapshots and clones      Yes     No     No

- Control plane components – volume exporters, volume sidecars, API server and provisioner
The control plane is responsible for volume operations such as provisioning volumes, making clones, exporting volume metrics and enforcing volume policies.

- Node Disk Manager (NDM) – Used for discovery, monitoring and management of the media/disks attached to Kubernetes nodes.
Node Disk Manager is the tool used to manage persistent storage in Kubernetes for stateful applications. It brings flexibility to the management of the storage stack by unifying disks, creating pools and identifying them as Kubernetes objects.
NDM discovers, provisions, manages and monitors the underlying disks for PV provisioners like OpenEBS, and exposes disk metrics that can be scraped by Prometheus.
How to Set Up OpenEBS on a Kubernetes Cluster
This article will discuss how to set up OpenEBS on a Kubernetes cluster. By the end of this article, we shall have covered the following:
- Set up OpenEBS on Kubernetes
- Provision Persistent Volumes on Kubernetes using OpenEBS
- Provision Storage Classes (SC) and Persistent Volume Claims (PVC) on Kubernetes
Installing OpenEBS on Kubernetes
Before we can start the installation, we have to make sure that the iSCSI client is installed and running on all the nodes. This is necessary for Jiva and cStor setups.
Verify that the iscsid service is running on your nodes; otherwise, install it.
Ubuntu/Debian
sudo apt update && sudo apt install open-iscsi
sudo systemctl enable --now iscsid
systemctl status iscsid
RedHat/CentOS
sudo yum install iscsi-initiator-utils -y
sudo systemctl enable --now iscsid
systemctl status iscsid
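As a quick sanity check, you can also confirm that an initiator name was generated when the package was installed (this file path is the default on both distributions):
cat /etc/iscsi/initiatorname.iscsi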
Method 1: Install OpenEBS on Kubernetes using Helm
We can deploy OpenEBS through Helm charts. First, check the version of Helm installed on your system.
$ helm version
version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.19.3"}
For Helm 2, install the OpenEBS chart using the commands below:
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs
For Helm 3, we need to create the openebs namespace before we can deploy the chart:
$ kubectl create ns openebs
namespace/openebs created
Deploy OpenEBS from the Helm chart:
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs openebs openebs/openebs
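If you want to confirm that the release was deployed (Helm 3 syntax shown), list the releases in the namespace:
helm ls -n openebs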
Method 2: Install OpenEBS on Kubernetes through kubectl
We can also use kubectl to install OpenEBS.
Create the openebs namespace:
kubectl create ns openebs
Install OpenEBS:
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
Verify Pods
After a successful installation, verify that the pods are up:
kubectl get pods -n openebs
You should see the OpenEBS control-plane pods (provisioner, API server and the NDM daemon set pods) in a Running state.

Verify Storage Classes (SC)
Ensure that the default storage classes (SC) have been created.
$ kubectl get sc
You will see the storage classes created:
root@bazenga:~# kubectl get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  11m
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  11m
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  11m
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  11m
Verify Block Device CRs
The OpenEBS NDM daemon set identifies the available block devices on the nodes and creates a CR for each. All the disks available on the nodes will be identified unless you have specified an exclusion in the vendor-filter or path-filter of the NDM ConfigMap.
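For reference, these exclusions live in the openebs-ndm-config ConfigMap shipped with the operator. Below is a sketch of what the filter section looks like; the exclude values shown are illustrative defaults, so adjust them to your environment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
data:
  node-disk-manager.config: |
    filterconfigs:
      - key: vendor-filter
        name: vendor filter
        state: true
        include: ""
        exclude: "CLOUDBYT,OpenEBS"   # disks from these vendors are ignored
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"   # device paths NDM should skip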
List the block device CRs:
kubectl get blockdevice -n openebs
Example:
root@bazenga:~# kubectl get blockdevice 
NAME                                           NODENAME   SIZE          CLAIMSTATE   STATUS   AGE
blockdevice-4f749fd37d6bfc5613c9272b2eb75cc0   node02     10736352768   Unclaimed    Active   15m
blockdevice-59c0818b5f8b2e56028959d921221af2   node03     10736352768   Unclaimed    Active   15m
blockdevice-79b8a6c83ee34a7e4b55e8d23f14323d   node03     21473771008   Unclaimed    Active   15m
To verify which node a device CR belongs to, run the describe command.
kubectl describe blockdevice <blockdevice-cr> -n openebs
Example:
$ kubectl describe blockdevice blockdevice-4f749fd37d6bfc5613c9272b2eb75cc0 -n openebs
..........
Spec:
  Capacity:
    Logical Sector Size:   512
    Physical Sector Size:  512
    Storage:               10736352768
  Details:
    Compliance:            
    Device Type:           partition
    Drive Type:            SSD
    Firmware Revision:     
    Hardware Sector Size:  512
    Logical Block Size:    512
    Model:                 
    Physical Block Size:   512
    Serial:                
    Vendor:                
  Devlinks:
  Filesystem:
  Node Attributes:
    Node Name:  node02
  Partitioned:  No
  Path:         /dev/xvdb1
Status:
  Claim State:  Unclaimed
  State:        Active
Events:         <none>
Working with Storage Engines
As discussed earlier, OpenEBS provides three storage engines that one can choose to work with.
The engines are:
- cStor
- Jiva
- Local PV
We shall discuss how to use the three storage engines.
Persistent volumes using cStor
For cStor, there are a number of operations needed to provision a persistent volume that utilizes this feature. These are:
- Create cStor storage pools
- Create cStor storage classes
- Provision a cStor volume
We will go through the steps to achieve the above.
Step 1 – Create cStor Storage pool
The storage pool is created by specifying the block devices on the nodes. Use the steps below to create a cStor storage pool.
- Get the details of the block devices attached to the k8s cluster
kubectl get blockdevice -n openebs -o jsonpath='{ range .items[*]} {.metadata.name}{"\n"}{end}'
Example:
root@bazenga:~# kubectl get blockdevice -o jsonpath='{ range .items[*]} {.metadata.name}{"\n"}{end}'
 blockdevice-4f749fd37d6bfc5613c9272b2eb75cc0
 blockdevice-59c0818b5f8b2e56028959d921221af2
 blockdevice-79b8a6c83ee34a7e4b55e8d23f14323d
Identify the unclaimed block devices:
$ kubectl get blockdevice -n openebs | grep Unclaimed
Example:
root@bazenga:~# kubectl get blockdevice | grep Unclaimed
blockdevice-4f749fd37d6bfc5613c9272b2eb75cc0   node02     10736352768   Unclaimed    Active   82m
blockdevice-59c0818b5f8b2e56028959d921221af2   node03     10736352768   Unclaimed    Active   82m
blockdevice-79b8a6c83ee34a7e4b55e8d23f14323d   node03     21473771008   Unclaimed    Active   82m
- Create a StoragePoolClaim YAML file specifying PoolResourceRequests and PoolResourceLimits. These values specify the minimum and maximum resources that will be allocated to the volumes, depending on the available resources on the nodes. You will also list the block devices in your cluster under blockDeviceList.
vim cstor-pool1-config.yaml
Add the content below, replacing the block devices with yours.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
            memory: 2Gi
      - name: PoolResourceLimits
        value: |-
            memory: 4Gi
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-4f749fd37d6bfc5613c9272b2eb75cc0
    - blockdevice-59c0818b5f8b2e56028959d921221af2
- Apply the configuration.
kubectl apply -f cstor-pool1-config.yaml
- Verify that a cStor pool configuration has been created.
kubectl get spc
Desired output:
root@bazenga:~# kubectl get spc
NAME              AGE
cstor-disk-pool   76s
Verify that the cStor pool was successfully created:
$ kubectl get csp
Output:
root@bazenga:~# kubectl get csp
NAME                   ALLOCATED   FREE   CAPACITY   STATUS   READONLY   TYPE      AGE
cstor-disk-pool-4wgo   101K        9.94G   9.94G      Healthy   False      striped   35s
cstor-disk-pool-v4sh   101K        9.94G   9.94G      Healthy   False      striped   35s
Verify that the cStor pool pods are running on the nodes.
$ kubectl get pod -n openebs | grep -i <spc_name>
Example:
root@bazenga:~# kubectl get pod -n openebs | grep cstor-disk-pool
cstor-disk-pool-4wgo-bd646764d-7f82v          3/3     Running   0          12m
cstor-disk-pool-v4sh-78dc8c4c7c-7gwhx         3/3     Running   0          12m
We can now use the cStor storage pools to provision cStor volumes.
Step 2 – Create cStor StorageClass
We need to provision a StorageClass out of the StoragePool we created. This will be used for the volume claims.
In the StorageClass, you will also be required to determine the replicaCount for the application that will use the cStor volume.
The example below is for a deployment with two replicas.
vim openebs-sc-rep.yaml
Add the content below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-sc-statefulset
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "2"
provisioner: openebs.io/provisioner-iscsi
Apply the configuration:
kubectl apply -f openebs-sc-rep.yaml
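You can verify that the new StorageClass has been registered:
kubectl get sc openebs-sc-statefulset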
Step 3 – Create cStor Volume
We will then create a PersistentVolumeClaim named cstor-pvc, in a file called openebs-cstor-pvc.yaml, using the StorageClass we defined above.
vim openebs-cstor-pvc.yaml
Add the content below to the file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cstor-pvc
spec:
  storageClassName: openebs-sc-statefulset
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Apply the configuration:
kubectl apply -f openebs-cstor-pvc.yaml
Check if the PVC has been created:
root@bazenga:~# kubectl get pvc
NAME                      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
cstor-pvc                 Bound     pvc-8929c731-706d-4813-be6b-05099bc80df0   2Gi        RWO            openebs-sc-statefulset   12s
At this point, you can now deploy an application that will use the PersistentVolume under the storage class created.
root@bazenga:~# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS             REASON   AGE
pvc-8929c731-706d-4813-be6b-05099bc80df0   2Gi        RWO            Delete           Bound    default/cstor-pvc               openebs-sc-statefulset            10m
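As an illustration, here is a minimal demo pod that mounts the cstor-pvc claim; the pod name and busybox image are our own example choices, not part of the original walkthrough:
apiVersion: v1
kind: Pod
metadata:
  name: cstor-demo-pod          # hypothetical example name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /data          # the cStor volume is mounted here
      name: demo-vol
  volumes:
  - name: demo-vol
    persistentVolumeClaim:
      claimName: cstor-pvc      # the PVC created above
Once the pod is scheduled, data written under /data is stored on the cStor volume and replicated across the pool replicas.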
Provision Volumes using Jiva
Jiva is the second alternative among the OpenEBS storage engines.
The required operations to create a Jiva volume are:
- Create a Jiva pool
- Create a storage class
- Create the persistent volume.
Step 1. Create a Jiva Pool
Jiva runs on disks that have been prepared (formatted) and mounted on the nodes. This means that you have to provision the disks on the nodes before setting up a Jiva pool.
The steps below will guide you through preparing a Jiva disk.
- Create the partition using fdisk.
sudo fdisk /dev/sd<device-label>
- Make the filesystem.
sudo mkfs.ext4 /dev/<device-name>
- Create a mount point and mount the disk.
sudo mkdir /home/openebs-gpd
sudo mount /dev/sdb /home/openebs-gpd
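Optionally, you may want the mount to survive reboots; a common way to do that (assuming the device is /dev/sdb, as above) is an fstab entry:
echo '/dev/sdb /home/openebs-gpd ext4 defaults 0 0' | sudo tee -a /etc/fstab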
Proceed to create the Jiva pool using a jiva-gpd-pool.yaml file, as below.
vim jiva-gpd-pool.yaml
Add the content below:
apiVersion: openebs.io/v1alpha1
kind: StoragePool
metadata:
  name: gpdpool
  type: hostdir
spec:
  path: "/home/openebs-gpd"
Apply the configuration file to create your pool.
kubectl apply -f jiva-gpd-pool.yaml
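Optionally, confirm the pool object was created; since StoragePool is a custom resource, it should be listable once the OpenEBS CRDs are installed:
kubectl get storagepool gpdpool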
Step 2 – Create Jiva StorageClass
Create a StorageClass that will be used for Persistent Volume Claims.
vim jiva-gpd-2repl-sc.yaml
Add the content below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "2"
      - name: StoragePool
        value: gpdpool
provisioner: openebs.io/provisioner-iscsi
Apply the configuration:
kubectl apply -f jiva-gpd-2repl-sc.yaml
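As with the cStor class, you can confirm the StorageClass is registered:
kubectl get sc openebs-jiva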
Step 3 – Create Volume from Jiva
Create a Jiva PVC with the config file below:
vim jiva-pvc.yaml
Add the content below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jiva-pvc
spec:
  storageClassName: openebs-jiva
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Then deploy the PVC:
kubectl apply -f jiva-pvc.yaml
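You can then confirm that the claim binds (the Jiva target and replica pods are created when the volume is provisioned):
kubectl get pvc jiva-pvc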
Provision Volumes using LocalPV
LocalPV utilizes host paths and local disks. OpenEBS comes with default StorageClasses for hostpath and device. You can, however, create your own SC if you want to specify a separate path on your host. This can be configured in the YAML file below:
vim custom-local-hostpath-sc.yaml
Add the content below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/local-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
You will be required to specify the BasePath value for the hostpath.
Apply the configuration:
kubectl apply -f custom-local-hostpath-sc.yaml
To verify that the LocalPV SC has been created:
kubectl get sc
Create a PVC for LocalPV
Create a PVC YAML file that uses the storage class created above, or the default SC.
vim local-hostpath-pvc.yaml
Add the following:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Apply the configuration:
kubectl apply -f local-hostpath-pvc.yaml
Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the claim will remain Pending until a pod consumes it.
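To see the claim bind, you can schedule a minimal consumer pod; as with the cStor example, the pod name and image here are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: local-hostpath-demo        # hypothetical example name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - mountPath: /data             # backed by a directory under BasePath on the node
      name: local-vol
  volumes:
  - name: local-vol
    persistentVolumeClaim:
      claimName: local-hostpath-pvc   # the PVC created above
Once the pod is scheduled, the PVC transitions to Bound and a directory is created for it under the BasePath on the selected node.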
That's all that is needed to provision volumes with OpenEBS using the three available storage engines. You can find more detailed documentation on the official OpenEBS documentation site.