
Getting Started with evroc's CSI Driver: Persistent Storage for Your Kubernetes Cluster

#Engineering

If you are deploying Kubernetes on evroc cloud, you most likely need persistent storage that can move with the pods it serves. For that, you need the evroc CSI driver.

In this post, I'll walk you through what CSI drivers are, why they matter, and specifically how evroc's CSI driver bridges the gap between Kubernetes and evroc's block storage. By the end, you'll have a working 3-node cluster with nodes spread across availability zones, with fully functional persistent storage.

What is a CSI Driver, and Why Should You Care?

The Problem: Storage in the Early Kubernetes Days

Back when Kubernetes was young, storage was... complicated. If you wanted to use a storage provider, whether AWS EBS, Google Persistent Disks, or a local NFS server, that support had to be baked directly into Kubernetes itself. These were called "in-tree" drivers, and they came with some serious drawbacks:

  • Release cycle coupling: Found a bug in the AWS storage driver? You'd need to wait for the next Kubernetes release to get the fix.
  • Vendor lock-in: Adding support for a new storage provider meant modifying Kubernetes core code, not exactly accessible for most organizations.
  • Code bloat: The Kubernetes codebase grew larger and more complex with every new storage backend.

The Solution: Container Storage Interface

The Container Storage Interface (CSI) changed everything. It's a standardized API specification that decouples storage drivers from Kubernetes itself. Think of it as a contract: Kubernetes says "I need to create a volume," and the CSI driver says "I'll handle that with my specific storage backend."

Now storage vendors can:

  • Ship and update their drivers independently
  • Fix bugs without waiting for Kubernetes releases
  • Add features on their own timeline

For you, the cluster operator, this means more options, faster updates, and a consistent experience regardless of which storage provider you're using.

How CSI Works: The Two Components

A CSI driver isn't a single thing; it's two complementary components working together:

1. The Controller Plugin

This runs as a Deployment (typically one or more replicas) and handles the "big picture" storage operations:

  • Creating and deleting volumes
  • Attaching and detaching volumes from nodes
  • Taking snapshots (if supported)
  • Managing volume expansion (if supported)

The controller talks to your cloud provider's API. In evroc's case, it communicates with the evroc REST API to create Disks and manage HotSwapDiskAttachments.

2. The Node Plugin

This runs as a DaemonSet on every node in your cluster and handles the local operations:

  • Mounting volumes into the host filesystem
  • Formatting new volumes (creating filesystems)
  • Unmounting volumes when pods are deleted

When Kubernetes schedules a pod that needs storage, the node plugin on that specific host makes the volume available to the container.
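
To make this split concrete, here is a minimal sketch of the CSIDriver object a driver registers with the cluster so Kubernetes knows how to interact with it. The driver name matches what we'll see later in this guide, but the spec values are illustrative assumptions, not evroc's published settings.

yaml
# Illustrative only: field values are assumptions, not evroc's actual configuration.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: disk.csi.evroc.com       # the driver name we'll see later via `kubectl get csidriver`
spec:
  attachRequired: true           # the controller must attach the disk to the VM before the node plugin mounts it
  podInfoOnMount: false
  volumeLifecycleModes:
    - Persistent                 # dynamically provisioned, long-lived volumes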

evroc's CSI Driver: The Specifics

Now that we understand CSI in general, let's look at evroc's implementation: a production-ready driver that bridges Kubernetes with evroc's block storage platform.

What It Does

The driver implements the full CSI specification and integrates directly with evroc's infrastructure:

Volume Lifecycle Management

  • Provisioning: When you create a PersistentVolumeClaim in Kubernetes, the driver automatically creates an evroc Disk
  • Attachment: Uses evroc's HotSwapDiskAttachment feature to attach disks to running VMs without rebooting
  • Mounting: Formats disks with ext4 and mounts them into your pods
  • Cleanup: Detaches and deletes disks when they're no longer needed

Topology Awareness

  • evroc operates across multiple availability zones (in Stockholm: se-sto-a, se-sto-b, se-sto-c)
  • The driver is zone-aware and creates volumes in the same zone as your pods
  • This is crucial because evroc disks can only attach to VMs in the same zone
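
In practice, topology awareness surfaces through the StorageClass. Below is a minimal sketch of a zone-pinned class that delays provisioning until the pod is scheduled, so the disk ends up in the same zone as the pod. The provisioner name and zone label match what we set up later in this guide, but the class itself is illustrative; it is not the default class the Helm chart installs.

yaml
# Sketch: a zone-pinned StorageClass (the class name is hypothetical).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: evroc-zone-a-example
provisioner: disk.csi.evroc.com
volumeBindingMode: WaitForFirstConsumer   # provision only after the pod lands on a node
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values: ["a"]                     # matches the node labels applied in Step 9

With WaitForFirstConsumer, the scheduler picks a node first and the driver then provisions the disk in that node's zone, which is exactly what the same-zone attachment constraint requires.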

Authentication

  • Uses OIDC/OAuth2 with automatic token refresh
  • Integrates with evroc's IAM for secure access control

Current Features

Feature                        Status
Dynamic volume provisioning    Supported
Topology-aware scheduling      Supported
ReadWriteOnce access mode      Supported
ReadWriteOncePod access mode   Supported
ReadOnlyMany (filesystem)      Supported
Block volumes (raw devices)    Supported
ext4 filesystem                Supported
Volume snapshots               Coming soon
Volume expansion               Coming soon
Volume cloning                 Coming soon
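
As an example of one of these features, a raw block volume is requested simply by setting volumeMode: Block on the claim. The sketch below is illustrative (the claim name is made up) and assumes the default evroc-standard StorageClass installed later in this guide.

yaml
# Illustrative raw block claim; the claim name is hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-example
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block              # request an unformatted device instead of a mounted filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: evroc-standard

A pod then consumes such a claim through volumeDevices rather than volumeMounts.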

Prerequisites

Before we begin, ensure you have:

  1. evroc CLI installed and configured - You need the CLI to create VMs and manage infrastructure; you can download it here
  2. SSH key pair - You'll need an SSH key to access the VMs (we'll use ~/.ssh/id_ed25519)
  3. Sufficient quota - For 3 VMs (a1a.m profile), 3 public IPs, and storage volumes
  4. Robot account - The CSI driver requires a robot account with admin permissions on your project; if you don't have one yet, request it from support@evroc.com

Complete Setup Guide: 3-Node Multi-Zone Cluster with CSI

This guide will take you from zero to a fully functional 3-zone Kubernetes cluster with working persistent storage.

Architecture Overview

Region: se-sto

  • Zone a: k8s-node-a (control plane), private IP 10.0.1.2
  • Zone b: k8s-node-b (worker), private IP 10.0.2.2
  • Zone c: k8s-node-c (worker), private IP 10.0.3.2

Kubernetes cluster:

  • All communication on private IPs (10.0.x.x)
  • Calico CNI for pod networking
  • evroc CSI driver for storage

Step 1: Create the Project

First, create a dedicated project for this demo:

bash
# Log in to evroc cloud
evroc login

# Create project
evroc iam project create csi-demo --name "CSI Driver Demo"

# Set as current project
evroc config set-project csi-demo

Step 2: Create the k8s-internal Security Group (CRITICAL!)

Important: Without this security group, VMs in different zones cannot communicate on their private IPs, which breaks Kubernetes networking.

The security group needs a self-referencing rule that allows ALL protocols (required for Calico's IP-in-IP encapsulation):

bash
# Create the security group
evroc networking securitygroup create k8s-internal

# Add self-referencing rule for all protocols
evroc networking securitygroup addrule k8s-internal \
  --name=allow-self-traffic \
  --direction=Ingress \
  --security-group=k8s-internal \
  --protocol=All

Step 3: Create Boot Disks

Create boot disks in each zone using Ubuntu 24.04:

bash
# Create boot disk for zone a
evroc compute disk create k8s-node-a-boot \
  --zone=a \
  --image=ubuntu.24-04.1

# Create boot disk for zone b
evroc compute disk create k8s-node-b-boot \
  --zone=b \
  --image=ubuntu.24-04.1

# Create boot disk for zone c
evroc compute disk create k8s-node-c-boot \
  --zone=c \
  --image=ubuntu.24-04.1

Step 4: Create Public IPs

Create public IPs for SSH access to each VM:

bash
evroc networking publicip create k8s-node-a-ip
evroc networking publicip create k8s-node-b-ip
evroc networking publicip create k8s-node-c-ip

Step 5: Create the VMs

Create the 3 VMs across zones, applying all security groups:

Node in zone a (Control Plane):

bash
evroc compute virtualmachine create k8s-node-a \
  --zone=a \
  --running=true \
  --vm-virtual-resources-ref=a1a.m \
  --disk=k8s-node-a-boot \
  --boot-from=true \
  --public-ip=k8s-node-a-ip \
  --ssh-authorized-key="$(cat ~/.ssh/id_ed25519.pub)" \
  --security-group=default-allow-ssh \
  --security-group=default-allow-egress \
  --security-group=default-allow-web-protocols \
  --security-group=k8s-internal

Node in zone b (Worker):

bash
evroc compute virtualmachine create k8s-node-b \
  --zone=b \
  --running=true \
  --vm-virtual-resources-ref=a1a.m \
  --disk=k8s-node-b-boot \
  --boot-from=true \
  --public-ip=k8s-node-b-ip \
  --ssh-authorized-key="$(cat ~/.ssh/id_ed25519.pub)" \
  --security-group=default-allow-ssh \
  --security-group=default-allow-egress \
  --security-group=default-allow-web-protocols \
  --security-group=k8s-internal

Node in zone c (Worker):

bash
evroc compute virtualmachine create k8s-node-c \
  --zone=c \
  --running=true \
  --vm-virtual-resources-ref=a1a.m \
  --disk=k8s-node-c-boot \
  --boot-from=true \
  --public-ip=k8s-node-c-ip \
  --ssh-authorized-key="$(cat ~/.ssh/id_ed25519.pub)" \
  --security-group=default-allow-ssh \
  --security-group=default-allow-egress \
  --security-group=default-allow-web-protocols \
  --security-group=k8s-internal

Wait for VMs to be ready (this takes about 2-3 minutes):

bash
# Check VM status
evroc compute virtualmachine list

You should see all three VMs with status Ready: True.

Get the public IPs for SSH access:

bash
# Get the public IPs
echo "k8s-node-a: $(evroc compute virtualmachine get k8s-node-a | grep 'publicIPv4Address: [0-9]' | head -1 |
awk '{print $2}')"
echo "k8s-node-b: $(evroc compute virtualmachine get k8s-node-b | grep 'publicIPv4Address: [0-9]' | head -1 |
awk '{print $2}')"
echo "k8s-node-c: $(evroc compute virtualmachine get k8s-node-c | grep 'publicIPv4Address: [0-9]' | head -1 |
awk '{print $2}')"

Note: Save these IPs - you'll need them for the next steps.

Step 6: Install Kubernetes

Now install Kubernetes on all 3 VMs. SSH into each and run the installation commands. Important: You must install Kubernetes on all 3 VMs before initializing the cluster. Do not initialize the control plane until all nodes have Kubernetes installed.

On k8s-node-a (replace with the actual public IP):

bash
ssh -i ~/.ssh/id_ed25519 evroc-user@<k8s-node-a-ip>

Then run these commands on the VM:

bash
# Update packages
sudo apt-get update

# Install prerequisites
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release

# Install containerd
sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd

# Add Kubernetes apt repository
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubeadm, kubelet, kubectl
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable kubelet

# Load kernel modules required by containerd and pod networking
# (br_netfilter must be loaded before the bridge sysctls below can take effect)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Configure sysctl for Kubernetes
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

Note: Repeat the same commands on k8s-node-b and k8s-node-c.

Step 7: Initialize the Control Plane

On k8s-node-a, get the private IP and initialize the cluster:

bash
# Get the private IP
PRIVATE_IP=$(hostname -I | awk '{print $1}')
echo "Private IP: $PRIVATE_IP"  # Should be 10.0.1.2

# Initialize the cluster (CRITICAL: Use private IP!)
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=$PRIVATE_IP \
  --control-plane-endpoint=$PRIVATE_IP:6443 \
  --node-name=k8s-node-a

Set up kubeconfig:

bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install Calico CNI:

bash
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

# Wait for Calico to be ready
kubectl rollout status daemonset/calico-node -n kube-system --timeout=120s

Step 8: Join Worker Nodes

Get the join command from the control plane:

bash
# On k8s-node-a
sudo kubeadm token create --print-join-command

This outputs something like:

plaintext
kubeadm join 10.0.1.2:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

On k8s-node-b:

bash
ssh -i ~/.ssh/id_ed25519 evroc-user@<k8s-node-b-ip>

Then run the join command (replace with your actual token):

bash
sudo kubeadm join 10.0.1.2:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --node-name=k8s-node-b

On k8s-node-c:

bash
ssh -i ~/.ssh/id_ed25519 evroc-user@<k8s-node-c-ip>

Then run:

bash
sudo kubeadm join 10.0.1.2:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --node-name=k8s-node-c

Verify all nodes are ready:

bash
# On k8s-node-a
kubectl get nodes -o wide

You should see:

plaintext
NAME         STATUS   ROLES           VERSION   INTERNAL-IP
k8s-node-a   Ready    control-plane   v1.35.x   10.0.1.2
k8s-node-b   Ready    <none>          v1.35.x   10.0.2.2
k8s-node-c   Ready    <none>          v1.35.x   10.0.3.2

Step 9: Label Nodes with Zones

The CSI driver needs to know which zone each node is in:

bash
# On k8s-node-a
kubectl label node k8s-node-a topology.kubernetes.io/zone=a
kubectl label node k8s-node-b topology.kubernetes.io/zone=b
kubectl label node k8s-node-c topology.kubernetes.io/zone=c

Verify:

bash
kubectl get nodes --label-columns=topology.kubernetes.io/zone
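
Each node should now carry its zone label; the output will look roughly like this (columns trimmed for readability):

plaintext
NAME         STATUS   ROLES           ZONE
k8s-node-a   Ready    control-plane   a
k8s-node-b   Ready    <none>          b
k8s-node-c   Ready    <none>          c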

Step 10: Grant Robot Account Admin Permissions

NOTE: If you don't have a robot account for the CSI driver you can request this from support@evroc.com.

Critical step: The CSI driver robot account needs admin permissions on your project to create disks and attachments.

On your local machine (where evroc CLI is configured):

bash
evroc iam permissionset create csi-robot-admin \
  --admin \
  --email=csi-driver-robot-name@evroc.com

Without this, the CSI driver will get "403 Forbidden - not admin" errors when trying to create volumes.

Step 11: Install Helm on the Control Plane

bash
# On k8s-node-a
ssh -i ~/.ssh/id_ed25519 evroc-user@<k8s-node-a-ip>

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Step 12: Configure CSI Driver Credentials

Create the configuration secret. Important: The secret must have a config.yaml key with YAML content, not individual environment variables.

On your local machine:

bash
# Create the config file
cat > /tmp/csi-config.yaml <<EOF
evroc:
  organization: "<your organisation id>"
  project: "csi-demo"
auth:
  username: "csi-driver-robot-name@evroc.com"
  password: "<your auth token>"
infrastructure:
  region: "se-sto"
EOF

# Copy to control plane
scp -i ~/.ssh/id_ed25519 /tmp/csi-config.yaml \
  evroc-user@<k8s-node-a-ip>:/tmp/

# SSH to control plane and create secret
ssh -i ~/.ssh/id_ed25519 evroc-user@<k8s-node-a-ip> \
  "kubectl create secret generic evroc-credentials \
    --namespace=kube-system \
    --from-file=config.yaml=/tmp/csi-config.yaml"

Step 13: Install the evroc CSI Driver

bash
# On k8s-node-a (SSH session)
helm install evroc-csi-driver \
  https://github.com/evroc-oss/evroc-csi-driver/releases/download/v0.1.6/evroc-csi-driver-0.1.6.tgz \
  --namespace kube-system \
  --set evroc.existingConfigSecret=evroc-credentials

Verify the installation:

bash
kubectl get pods -n kube-system -l app.kubernetes.io/name=evroc-csi-driver

You should see:

plaintext
NAME                                           READY   STATUS
evroc-csi-driver-controller-...                3/3     Running
evroc-csi-driver-node-...                      2/2     Running  (on each node)

Also verify:

bash
kubectl get csidriver
# Should show: disk.csi.evroc.com

kubectl get storageclass
# Should show: evroc-standard (default)

Step 14: Test Persistent Storage

Create a test PVC and pod:

bash
# On k8s-node-a (SSH session)

# Create a test PVC
cat > /tmp/test-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: evroc-standard
EOF

kubectl apply -f /tmp/test-pvc.yaml

# Create a test pod
cat > /tmp/test-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo 'Hello from evroc CSI!' > /data/test.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF

kubectl apply -f /tmp/test-pod.yaml

# Wait for the pod to be ready
kubectl wait --for=condition=Ready pod/test-pod --timeout=120s

# Verify the PVC is bound
kubectl get pvc test-pvc

# Read the test data
kubectl exec test-pod -- cat /data/test.txt

You should see: Hello from evroc CSI!

Verify the disk was created in evroc (from your local machine):

bash
evroc compute disk list

You should see a disk named something like: csi-pvc-<uuid>
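
If you want to take this one step further than a single test pod, the same StorageClass works with a StatefulSet, where each replica gets its own disk that follows it across rescheduling. Here is a minimal sketch (the names and image are illustrative, and this is not a required step in the guide):

yaml
# Illustrative StatefulSet: one evroc disk per replica via volumeClaimTemplates.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-statefulset
spec:
  serviceName: demo              # references a headless Service you would create separately
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:          # each replica gets its own PVC, provisioned by the default class
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: evroc-standard
      resources:
        requests:
          storage: 5Gi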

Verification Checklist

Run through this checklist on node A to verify everything is working:

bash
# 1. All nodes are Ready
kubectl get nodes
# Should show 3 nodes in Ready state

# 2. All system pods are Running
kubectl get pods -n kube-system
# Should show Calico, CoreDNS, and CSI pods all Running

# 3. CSI driver is registered
kubectl get csidriver
# Should show disk.csi.evroc.com

# 4. StorageClass exists
kubectl get storageclass
# Should show evroc-standard (default)

# 5. CSI controller is working
kubectl logs -n kube-system deployment/evroc-csi-driver-controller -c evroc-csi-driver --tail=20
# Should show successful API calls (status 200), not 403 Forbidden

# 6. PVC is Bound
kubectl get pvc test-pvc
# Should show STATUS: Bound

# 7. Data persists
echo "test data" | kubectl exec -i test-pod -- tee /data/persist.txt
kubectl delete pod test-pod
kubectl apply -f /tmp/test-pod.yaml
kubectl wait --for=condition=Ready pod/test-pod --timeout=60s
kubectl exec test-pod -- cat /data/persist.txt
# Should show: test data

Key Considerations

Here are some important design patterns to keep in mind when working with the evroc platform and CSI driver:

1. Configure Security Groups for Inter-Zone Communication

By default, evroc's security model requires explicit configuration for cross-zone networking. This security-first approach ensures traffic is only allowed where you specifically permit it. When deploying Kubernetes across multiple zones, please remember to create a security group with a self-referencing rule:

bash
evroc networking securitygroup create k8s-internal
evroc networking securitygroup addrule k8s-internal \
  --name=allow-self-traffic \
  --direction=Ingress \
  --security-group=k8s-internal \
  --protocol=All

This design allows you to precisely control which resources can communicate across zones while maintaining a secure default posture.

2. Use Private IPs for Kubernetes Control Plane

evroc's networking is designed with a private-first architecture. For optimal performance and security, please remember to use private IP addresses when initializing your Kubernetes control plane:

bash
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=<PRIVATE_IP> \
  --control-plane-endpoint=<PRIVATE_IP>:6443

3. CSI Driver Configuration Format

The evroc CSI driver uses a structured YAML configuration file for clean, maintainable credential management. Please remember to format your secret with a config.yaml key containing the configuration in YAML format:

yaml
apiVersion: v1
kind: Secret
metadata:
  name: evroc-credentials
  namespace: kube-system
type: Opaque
stringData:
  config.yaml: |
    evroc:
      organization: "..."
      project: "csi-demo"
    auth:
      username: "csi-driver-robot-name@evroc.com"
      password: "..."
    infrastructure:
      region: "se-sto"

4. Grant Appropriate Permissions to Service Accounts

evroc follows the principle of least privilege with explicit permission grants. Please remember to create a permission set with the appropriate administrative access for your CSI driver service account:

bash
evroc iam permissionset create csi-robot-admin \
  --admin \
  --email=csi-driver-robot-name@evroc.com

Cleanup

When you're done, tear everything down:

bash
# Delete test resources
kubectl delete pod test-pod --ignore-not-found
kubectl delete pvc test-pvc --ignore-not-found

# Delete VMs (run from local machine)
evroc compute virtualmachine delete k8s-node-a
evroc compute virtualmachine delete k8s-node-b
evroc compute virtualmachine delete k8s-node-c

# Wait for VMs to be deleted, then delete disks
evroc compute disk delete k8s-node-a-boot
evroc compute disk delete k8s-node-b-boot
evroc compute disk delete k8s-node-c-boot

# Delete public IPs
evroc networking publicip delete k8s-node-a-ip
evroc networking publicip delete k8s-node-b-ip
evroc networking publicip delete k8s-node-c-ip

# Delete project
evroc iam project delete csi-demo

Troubleshooting

Issue: Nodes can't join cluster

Symptoms: Worker nodes stuck in NotReady state. Solution: Check security groups. Ensure k8s-internal is applied to all VMs with protocol=All.

Issue: CSI controller shows CrashLoopBackOff

Symptoms: Controller pod restarting repeatedly. Solution: Check logs with kubectl logs. Most likely causes:

  • Secret not in correct format (needs config.yaml key)
  • Robot account lacks admin permissions (403 errors)

Issue: PVC stuck in Pending

Symptoms: PVC never binds to a volume. Solution:

  • Check kubectl describe pvc for events
  • Verify CSI controller is running
  • Check controller logs for API errors

Issue: Pod can't mount volume

Symptoms: Pod stuck in ContainerCreating. Solution: Check CSI node pod logs on the node where pod is scheduled.
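
For example, something along these lines usually narrows it down. The -c container name is an assumption carried over from the controller log command in the checklist above; adjust it to whatever kubectl describe shows for your node pods.

bash
# Find which node the stuck pod was scheduled to
kubectl get pod test-pod -o wide

# List the CSI node pods and note the one running on that node
kubectl get pods -n kube-system -l app.kubernetes.io/name=evroc-csi-driver -o wide

# Inspect that node pod's recent logs for mount/format errors
kubectl logs -n kube-system <csi-node-pod-name> -c evroc-csi-driver --tail=50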

Summary

You now have a production-ready 3-zone Kubernetes cluster with:

  • Nodes across zones a, b, c
  • Private IP networking (10.0.1.2, 10.0.2.2, 10.0.3.2)
  • Calico CNI working
  • evroc CSI driver operational
  • Working PVC/PV provisioning
  • Data persistence verified

Happy Kuberneting!
