If you are deploying Kubernetes on evroc cloud, you will almost certainly need persistent storage that can move with the pods it serves. For that, you need the evroc CSI driver.
In this post, I'll walk you through what CSI drivers are, why they matter, and specifically how evroc's CSI driver bridges the gap between Kubernetes and evroc's block storage. By the end, you'll have a working 3-node cluster spread across availability zones, with fully functional persistent storage.
Back when Kubernetes was young, storage was... complicated. If you wanted to use a storage provider, whether AWS EBS, Google Persistent Disks, or a local NFS server, that support had to be baked directly into Kubernetes itself. These were called "in-tree" drivers, and they came with some serious drawbacks: driver code shipped inside every Kubernetes release, vendors were locked to the Kubernetes release cycle, and a bug in any one driver could destabilize the core codebase.
The Container Storage Interface (CSI) changed everything. It's a standardized API specification that decouples storage drivers from Kubernetes itself. Think of it as a contract: Kubernetes says "I need to create a volume," and the CSI driver says "I'll handle that with my specific storage backend."
Now storage vendors can develop, release, and patch their drivers on their own schedule, entirely outside the Kubernetes source tree.
For you, the cluster operator, this means more options, faster updates, and a consistent experience regardless of which storage provider you're using.
A CSI driver isn't a single thing; it's actually two complementary components working together:
1. The Controller Plugin
This runs as a Deployment (typically one or more replicas) and handles the "big picture" storage operations: creating and deleting volumes, and attaching and detaching them to and from nodes.
The controller talks to your cloud provider's API. In evroc's case, it communicates with the evroc REST API to create Disks and manage HotSwapDiskAttachments.
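You can see the result of those API calls from your own workstation: once the driver has provisioned a volume (as in the test at the end of this post), the backing disk shows up in the ordinary disk listing:
# Disks provisioned by the CSI driver appear alongside your other disks,
# named csi-pvc-<uuid> (see the test section later in this guide)
evroc compute disk list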
2. The Node Plugin
This runs as a DaemonSet on every node in your cluster and handles the local operations: formatting, mounting, and unmounting volumes on the host where a pod needs them.
When Kubernetes schedules a pod that needs storage, the node plugin on that specific host makes the volume available to the container.
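To make the split concrete: once the driver is installed (later in this guide), the two components show up in kube-system as a Deployment and a DaemonSet. A quick, illustrative way to see them (the object names follow the pod names you'll encounter in the install section):
# Controller plugin runs as a Deployment, node plugin as a DaemonSet
kubectl get deployment,daemonset -n kube-system | grep evroc-csi-driver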
Now that we understand CSI in general, let's talk about evroc's implementation: a production-ready driver that bridges Kubernetes with evroc's block storage platform.
The driver implements the full CSI specification and integrates directly with evroc's infrastructure:
Volume Lifecycle Management
Topology Awareness (availability zones se-sto-a, se-sto-b, se-sto-c; see the StorageClass sketch below)
Authentication
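To make the topology point concrete, here is a sketch of a zone-restricted StorageClass. The provisioner name matches the CSIDriver object you'll see after installation (disk.csi.evroc.com), but the topology key and everything else here is an assumption; check the Helm chart's defaults before relying on it:
# Hypothetical zone-restricted StorageClass (topology key and parameters are assumptions)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: evroc-zone-a-example
provisioner: disk.csi.evroc.com
volumeBindingMode: WaitForFirstConsumer   # create the disk in the zone where the pod is scheduled
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - a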
| Feature | Status |
|---|---|
| Dynamic volume provisioning | Supported |
| Topology-aware scheduling | Supported |
| ReadWriteOnce access mode | Supported |
| ReadWriteOncePod access mode | Supported |
| ReadOnlyMany (filesystem) | Supported |
| Block volumes (raw devices) | Supported |
| ext4 filesystem | Supported |
| Volume snapshots | Coming soon |
| Volume expansion | Coming soon |
| Volume cloning | Coming soon |
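As the table shows, raw block volumes and the ReadWriteOncePod access mode are both supported. As a quick illustration, here is a sketch of a PVC that requests a raw block device; it assumes the evroc-standard StorageClass that the Helm chart installs later in this guide:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOncePod   # exclusive access for a single pod
  volumeMode: Block      # expose the volume as a raw device instead of a mounted filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: evroc-standard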
Before we begin, ensure you have:
An SSH key pair (~/.ssh/id_ed25519)
This guide will take you from zero to a fully functional 3-zone Kubernetes cluster with working persistent storage.
| Zone (region se-sto) | Node | Role | Private IP |
|---|---|---|---|
| a | k8s-node-a | Control plane | 10.0.1.2 |
| b | k8s-node-b | Worker | 10.0.2.2 |
| c | k8s-node-c | Worker | 10.0.3.2 |
Together these form one Kubernetes cluster:
• All communication on private IPs (10.0.x.x)
• Calico CNI for pod networking
• evroc CSI Driver for storage
First, create a dedicated project for this demo:
# Log in to evroc cloud
evroc login
# Create project
evroc iam project create csi-demo --name "CSI Driver Demo"
# Set as current project
evroc config set-project csi-demo
Important: Without this security group, VMs in different zones cannot communicate on their private IPs, which breaks Kubernetes networking.
The security group needs a self-referencing rule that allows ALL protocols (required for Calico's IP-in-IP encapsulation):
# Create the security group
evroc networking securitygroup create k8s-internal
# Add self-referencing rule for all protocols
evroc networking securitygroup addrule k8s-internal \
--name=allow-self-traffic \
--direction=Ingress \
--security-group=k8s-internal \
--protocol=All
Create boot disks in each zone using Ubuntu 24.04:
# Create boot disk for zone a
evroc compute disk create k8s-node-a-boot \
--zone=a \
--image=ubuntu.24-04.1
# Create boot disk for zone b
evroc compute disk create k8s-node-b-boot \
--zone=b \
--image=ubuntu.24-04.1
# Create boot disk for zone c
evroc compute disk create k8s-node-c-boot \
--zone=c \
--image=ubuntu.24-04.1
Create public IPs for SSH access to each VM:
evroc networking publicip create k8s-node-a-ip
evroc networking publicip create k8s-node-b-ip
evroc networking publicip create k8s-node-c-ip
Create the 3 VMs across zones, applying all security groups:
Node in zone a (Control Plane):
evroc compute virtualmachine create k8s-node-a \
--zone=a \
--running=true \
--vm-virtual-resources-ref=a1a.m \
--disk=k8s-node-a-boot \
--boot-from=true \
--public-ip=k8s-node-a-ip \
--ssh-authorized-key="$(cat ~/.ssh/id_ed25519.pub)" \
--security-group=default-allow-ssh \
--security-group=default-allow-egress \
--security-group=default-allow-web-protocols \
--security-group=k8s-internal
Node in zone b (Worker):
evroc compute virtualmachine create k8s-node-b \
--zone=b \
--running=true \
--vm-virtual-resources-ref=a1a.m \
--disk=k8s-node-b-boot \
--boot-from=true \
--public-ip=k8s-node-b-ip \
--ssh-authorized-key="$(cat ~/.ssh/id_ed25519.pub)" \
--security-group=default-allow-ssh \
--security-group=default-allow-egress \
--security-group=default-allow-web-protocols \
--security-group=k8s-internal
Node in zone c (Worker):
evroc compute virtualmachine create k8s-node-c \
--zone=c \
--running=true \
--vm-virtual-resources-ref=a1a.m \
--disk=k8s-node-c-boot \
--boot-from=true \
--public-ip=k8s-node-c-ip \
--ssh-authorized-key="$(cat ~/.ssh/id_ed25519.pub)" \
--security-group=default-allow-ssh \
--security-group=default-allow-egress \
--security-group=default-allow-web-protocols \
--security-group=k8s-internal
Wait for VMs to be ready (this takes about 2-3 minutes):
# Check VM status
evroc compute virtualmachine list
You should see all three VMs with status Ready: True.
Get the public IPs for SSH access:
# Get the public IPs
echo "k8s-node-a: $(evroc compute virtualmachine get k8s-node-a | grep 'publicIPv4Address: [0-9]' | head -1 |
awk '{print $2}')"
echo "k8s-node-b: $(evroc compute virtualmachine get k8s-node-b | grep 'publicIPv4Address: [0-9]' | head -1 |
awk '{print $2}')"
echo "k8s-node-c: $(evroc compute virtualmachine get k8s-node-c | grep 'publicIPv4Address: [0-9]' | head -1 |
awk '{print $2}')"
Note: Save these IPs - you'll need them for the next steps.
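If you'd rather not copy and paste IPs, you can capture them in shell variables (using the same pipeline as above) and reuse them in the SSH and SCP commands that follow:
# Optional: capture the public IPs in variables for the later ssh/scp commands
NODE_A_IP=$(evroc compute virtualmachine get k8s-node-a | grep 'publicIPv4Address: [0-9]' | head -1 | awk '{print $2}')
NODE_B_IP=$(evroc compute virtualmachine get k8s-node-b | grep 'publicIPv4Address: [0-9]' | head -1 | awk '{print $2}')
NODE_C_IP=$(evroc compute virtualmachine get k8s-node-c | grep 'publicIPv4Address: [0-9]' | head -1 | awk '{print $2}')
# Example usage
ssh -i ~/.ssh/id_ed25519 evroc-user@$NODE_A_IP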
Now install Kubernetes on all three VMs: SSH into each one and run the installation commands below. Important: every node must have Kubernetes installed before you initialize the cluster; do not initialize the control plane until all three are done.
On k8s-node-a (replace with the actual public IP):
ssh -i ~/.ssh/id_ed25519 evroc-user@<k8s-node-a-ip>
Then run these commands on the VM:
# Update packages
sudo apt-get update
# Install prerequisites
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
# Install containerd
sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
# Add Kubernetes apt repository
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.35/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.35/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Install kubeadm, kubelet, kubectl
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable kubelet
# Configure sysctl for Kubernetes
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
# Load kernel modules
sudo modprobe overlay
sudo modprobe br_netfilter
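The modprobe calls above only last until the next reboot. This isn't part of the original steps, but the standard kubeadm setup also persists the modules so the node comes back healthy after a restart:
# Load overlay and br_netfilter automatically on boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF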
Note: Repeat the same commands on k8s-node-b and k8s-node-c.
On k8s-node-a, get the private IP and initialize the cluster:
# Get the private IP
PRIVATE_IP=$(hostname -I | awk '{print $1}')
echo "Private IP: $PRIVATE_IP" # Should be 10.0.1.2
# Initialize the cluster (CRITICAL: Use private IP!)
sudo kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=$PRIVATE_IP \
--control-plane-endpoint=$PRIVATE_IP:6443 \
--node-name=k8s-node-a
Set up kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install Calico CNI:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
# Wait for Calico to be ready
kubectl rollout status daemonset/calico-node -n kube-system --timeout=120s
Get the join command from the control plane:
# On k8s-node-a
sudo kubeadm token create --print-join-command
This outputs something like:
kubeadm join 10.0.1.2:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
On k8s-node-b:
ssh -i ~/.ssh/id_ed25519 evroc-user@<k8s-node-b-ip>
Then run the join command (replace with your actual token):
sudo kubeadm join 10.0.1.2:6443 \
--token <token> \
--discovery-token-ca-cert-hash sha256:<hash> \
--node-name=k8s-node-b
On k8s-node-c:
ssh -i ~/.ssh/id_ed25519 evroc-user@<k8s-node-c-ip>
Then run:
sudo kubeadm join 10.0.1.2:6443 \
--token <token> \
--discovery-token-ca-cert-hash sha256:<hash> \
--node-name=k8s-node-c
Verify all nodes are ready:
# On k8s-node-a
kubectl get nodes -o wide
You should see:
NAME STATUS ROLES VERSION INTERNAL-IP
k8s-node-a Ready control-plane v1.35.x 10.0.1.2
k8s-node-b Ready <none> v1.35.x 10.0.2.2
k8s-node-c Ready <none> v1.35.x 10.0.3.2
The CSI driver needs to know which zone each node is in:
# On k8s-node-a
kubectl label node k8s-node-a topology.kubernetes.io/zone=a
kubectl label node k8s-node-b topology.kubernetes.io/zone=b
kubectl label node k8s-node-c topology.kubernetes.io/zone=c
Verify:
kubectl get nodes --label-columns=topology.kubernetes.io/zone
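You should see output along these lines (AGE and the exact patch version will vary):
NAME         STATUS   ROLES           AGE   VERSION   ZONE
k8s-node-a   Ready    control-plane   ...   v1.35.x   a
k8s-node-b   Ready    <none>          ...   v1.35.x   b
k8s-node-c   Ready    <none>          ...   v1.35.x   c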
Note: If you don't have a robot account for the CSI driver, you can request one from support@evroc.com.
Critical step: The CSI driver robot account needs admin permissions on your project to create disks and attachments.
On your local machine (where evroc CLI is configured):
evroc iam permissionset create csi-robot-admin \
--admin \
--email=csi-driver-robot-name@evroc.com
Without this, the CSI driver will get "403 Forbidden - not admin" errors when trying to create volumes.
# On k8s-node-a
ssh -i ~/.ssh/id_ed25519 evroc-user@<k8s-node-a-ip>
# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Create the configuration secret. Important: The secret must have a config.yaml key with YAML content, not individual environment variables.
On your local machine:
# Create the config file
cat > /tmp/csi-config.yaml <<EOF
evroc:
  organization: "<your organisation id>"
  project: "csi-demo"
  auth:
    username: "csi-driver-robot-name@evroc.com"
    password: "<your auth token>"
infrastructure:
  region: "se-sto"
EOF
# Copy to control plane
scp -i ~/.ssh/id_ed25519 /tmp/csi-config.yaml \
evroc-user@<k8s-node-a-ip>:/tmp/
# SSH to control plane and create secret
ssh -i ~/.ssh/id_ed25519 evroc-user@<k8s-node-a-ip> \
"kubectl create secret generic evroc-credentials \
--namespace=kube-system \
--from-file=config.yaml=/tmp/csi-config.yaml"
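If you want to double-check that the secret really ended up with a config.yaml key (the driver won't start without it), decode it on the control plane:
# Confirm the secret contains a config.yaml key with your YAML configuration
kubectl get secret evroc-credentials -n kube-system -o jsonpath='{.data.config\.yaml}' | base64 -d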
# On k8s-node-a (SSH session)
helm install evroc-csi-driver \
https://github.com/evroc-oss/evroc-csi-driver/releases/download/v0.1.6/evroc-csi-driver-0.1.6.tgz \
--namespace kube-system \
--set evroc.existingConfigSecret=evroc-credentials
Verify the installation:
kubectl get pods -n kube-system -l app.kubernetes.io/name=evroc-csi-driver
You should see:
NAME READY STATUS
evroc-csi-driver-controller-... 3/3 Running
evroc-csi-driver-node-... 2/2 Running (on each node)
Also verify:
kubectl get csidriver
# Should show: disk.csi.evroc.com
kubectl get storageclass
# Should show: evroc-standard (default)
Create a test PVC and pod:
# On k8s-node-a (SSH session)
# Create a test PVC
cat > /tmp/test-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: evroc-standard
EOF
kubectl apply -f /tmp/test-pvc.yaml
# Create a test pod
cat > /tmp/test-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo 'Hello from evroc CSI!' > /data/test.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
EOF
kubectl apply -f /tmp/test-pod.yaml
# Wait for the pod to be ready
kubectl wait --for=condition=Ready pod/test-pod --timeout=120s
# Verify the PVC is bound
kubectl get pvc test-pvc
# Read the test data
kubectl exec test-pod -- cat /data/test.txt
You should see: Hello from evroc CSI!
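Optionally, take a look at the PersistentVolume that was provisioned behind the PVC and its node-affinity (zone) constraints:
# Inspect the dynamically provisioned PV and its topology constraints
kubectl get pv
kubectl describe pv $(kubectl get pvc test-pvc -o jsonpath='{.spec.volumeName}')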
Verify the disk was created in evroc (from your local machine):
evroc compute disk list
You should see a disk named something like: csi-pvc-<uuid>
Run through this checklist on node A to verify everything is working:
# 1. All nodes are Ready
kubectl get nodes
# Should show 3 nodes in Ready state
# 2. All system pods are Running
kubectl get pods -n kube-system
# Should show Calico, CoreDNS, and CSI pods all Running
# 3. CSI driver is registered
kubectl get csidriver
# Should show disk.csi.evroc.com
# 4. StorageClass exists
kubectl get storageclass
# Should show evroc-standard (default)
# 5. CSI controller is working
kubectl logs -n kube-system deployment/evroc-csi-driver-controller -c evroc-csi-driver --tail=20
# Should show successful API calls (status 200), not 403 Forbidden
# 6. PVC is Bound
kubectl get pvc test-pvc
# Should show STATUS: Bound
# 7. Data persists
echo "test data" | kubectl exec -i test-pod -- tee /data/persist.txt
kubectl delete pod test-pod
kubectl apply -f /tmp/test-pod.yaml
kubectl wait --for=condition=Ready pod/test-pod --timeout=60s
kubectl exec test-pod -- cat /data/persist.txt
# Should show: test data
Here are some important design patterns to keep in mind when working with the evroc platform and CSI driver:
By default, evroc's security model requires explicit configuration for cross-zone networking. This security-first approach ensures traffic is only allowed where you specifically permit it. When deploying Kubernetes across multiple zones, please remember to create a security group with a self-referencing rule:
evroc networking securitygroup create k8s-internal
evroc networking securitygroup addrule k8s-internal \
--name=allow-self-traffic \
--direction=Ingress \
--security-group=k8s-internal \
--protocol=All
This design allows you to precisely control which resources can communicate across zones while maintaining a secure default posture.
evroc's networking is designed with a private-first architecture. For optimal performance and security, please remember to use private IP addresses when initializing your Kubernetes control plane:
sudo kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=<PRIVATE_IP> \
--control-plane-endpoint=<PRIVATE_IP>:6443
The evroc CSI driver uses a structured YAML configuration file for clean, maintainable credential management. Please remember to format your secret with a config.yaml key containing the configuration in YAML format:
apiVersion: v1
kind: Secret
metadata:
  name: evroc-credentials
  namespace: kube-system
stringData:
  config.yaml: |
    evroc:
      organization: "..."
      project: "csi-demo"
      auth:
        username: "csi-driver-robot-name@evroc.com"
        password: "..."
evroc follows the principle of least privilege with explicit permission grants. Please remember to create a permission set with the appropriate administrative access for your CSI driver service account:
evroc iam permissionset create csi-robot-admin \
--admin \
--email=csi-driver-robot-name@evroc.com
When you're done, tear everything down:
# Delete test resources
kubectl delete pod test-pod --ignore-not-found
kubectl delete pvc test-pvc --ignore-not-found
# Delete VMs (run from local machine)
evroc compute virtualmachine delete k8s-node-a
evroc compute virtualmachine delete k8s-node-b
evroc compute virtualmachine delete k8s-node-c
# Wait for VMs to be deleted, then delete disks
evroc compute disk delete k8s-node-a-boot
evroc compute disk delete k8s-node-b-boot
evroc compute disk delete k8s-node-c-boot
# Delete public IPs
evroc networking publicip delete k8s-node-a-ip
evroc networking publicip delete k8s-node-b-ip
evroc networking publicip delete k8s-node-c-ip
# Delete project
evroc iam project delete csi-demo
Symptoms: Worker nodes stuck in NotReady state.
Solution: Check security groups. Ensure k8s-internal is applied to all VMs with protocol=All.
Symptoms: Controller pod restarting repeatedly.
Solution: Check the logs with kubectl logs. The most likely causes are missing or invalid credentials, or a secret that doesn't contain a config.yaml key.
Symptoms: PVC never binds to a volume.
Solution: Check kubectl describe pvc for events and review the CSI controller logs.
Symptoms: Pod stuck in ContainerCreating.
Solution: Check the CSI node pod logs on the node where the pod is scheduled.
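A few commands cover most of these cases. The controller log command is the same one used in the verification checklist above; the label selector comes from the install verification step, and pod or container names for the node plugin may differ slightly in your install:
# Events for a stuck PVC
kubectl describe pvc test-pvc
# Controller logs (look for 403s or other API errors)
kubectl logs -n kube-system deployment/evroc-csi-driver-controller -c evroc-csi-driver --tail=50
# Find the CSI pods and the node each one runs on
kubectl get pods -n kube-system -l app.kubernetes.io/name=evroc-csi-driver -o wide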
You now have a production-ready 3-zone Kubernetes cluster with topology-aware persistent storage backed by the evroc CSI driver.
Happy Kuberneting!