For Kubernetes
Kubernetes on Edge,
without the platform tax
k3s, RKE2 or vanilla k8s on real VMs across regions. Persistent volumes on local NVMe, backups in S3-compatible storage, global CDN as your ingress front-end. No control-plane fee.
# Bootstrap a k3s control plane VM
$ edge compute create \
--image ubuntu-24-04 --plan medium \
--script ./bootstrap-k3s-server.sh
# Add worker nodes (any region)
$ edge compute create --plan small \
--region eu-west --count 3 \
--script ./join-k3s-worker.sh
# Front the ingress with the CDN
$ edge cdn create "*.cluster.example.com" \
--origin https://<ingress-vip>
✓ Cluster live, CDN routing traffic
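The two bootstrap scripts referenced above aren't shown on this page. A minimal sketch of what they might contain, using the standard k3s install script, with the control-plane address and join token left as placeholders:

# bootstrap-k3s-server.sh (sketch): single-server k3s install
curl -sfL https://get.k3s.io | sh -s - server \
  --tls-san <control-plane-ip>
# Print the join token the workers will need
sudo cat /var/lib/rancher/k3s/server/node-token

# join-k3s-worker.sh (sketch): K3S_URL and K3S_TOKEN are k3s's standard join variables
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<control-plane-ip>:6443 \
  K3S_TOKEN=<node-token> sh -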
Why teams run Kubernetes on Edge
Real VMs, real networking, real storage — all the primitives Kubernetes assumes, none of the managed-service markup.
Your distribution, your call
k3s for lightweight clusters, RKE2 for hardened production, kubeadm for vanilla — all run cleanly on Edge VMs. No proprietary control plane to negotiate with.
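For the hardened path, a rough sketch of an RKE2 server install with the CIS profile turned on (current RKE2 releases accept `profile: cis`, older ones use a versioned name, and the profile has host prerequisites such as an etcd user and kernel sysctls per the RKE2 docs):

# Sketch: RKE2 server with the CIS profile enabled
curl -sfL https://get.rke2.io | sh -
sudo mkdir -p /etc/rancher/rke2
echo "profile: cis" | sudo tee /etc/rancher/rke2/config.yaml
sudo systemctl enable --now rke2-server.service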
Multi-VM clusters on private networks
Spread control plane and worker nodes across Edge VMs and regions. Private networking keeps the etcd / API traffic off the public internet and off your egress bill.
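When the VMs have a private NIC, k3s can be pinned to it so node-to-node and API-server traffic never touches the public interface. A sketch, with the address and interface name as placeholder values:

# Sketch: bind the k3s server to the private interface (placeholder IP and NIC name)
curl -sfL https://get.k3s.io | sh -s - server \
  --node-ip 10.0.0.10 \
  --advertise-address 10.0.0.10 \
  --flannel-iface ens4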
Persistent volumes that persist
Local NVMe for stateful pods that need IOPS, plus an S3-compatible CSI driver pointed at Edge Storage for backups, snapshots and shared blob data.
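k3s bundles the local-path provisioner as its default StorageClass, so a volume on local NVMe is just an ordinary PVC. A sketch (the claim name and size are illustrative; other distributions need a local-volume provisioner installed first):

# Sketch: a PVC backed by node-local NVMe via k3s's local-path StorageClass
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 50Gi
EOF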
CDN as the cluster ingress
Front your ingress controller (Nginx, Traefik, Istio) with the Edge CDN. Global delivery, zero egress fees on the way out, automatic SSL handled at the edge.
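As an illustration with ingress-nginx (the Helm commands are the chart's standard install; the CDN command mirrors the demo above):

# Sketch: install ingress-nginx, then put the CDN in front of it
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace
# Point the CDN origin at whatever external address the ingress Service receives
$ edge cdn create "*.cluster.example.com" \
    --origin https://<ingress-vip>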
No control-plane fees
EKS and GKE charge $0.10/hour (roughly $73/month) per cluster before any nodes even exist, and AKS charges the same on its Standard tier. On Edge, the control plane is just another VM you already paid for.
GitOps-ready
Run Flux or ArgoCD against your cluster, deploy from a repo, audit every change. The Edge CLI stands up the cluster; your GitOps pipeline takes it from there.
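For illustration, bootstrapping Flux against a GitHub repo looks roughly like this (owner, repository and path are placeholders; Argo CD has an equivalent flow):

# Sketch: Flux commits its own manifests to the repo, then reconciles the cluster from it
$ flux bootstrap github \
    --owner=<your-org> \
    --repository=cluster-config \
    --branch=main \
    --path=./clusters/production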
Reference architecture
How a cluster maps to Edge
Control plane on one (or three) VMs, worker nodes wherever you need capacity, storage in object form for backups and shared data, and the CDN handling the world.
VMs for control plane and worker nodes (sized independently)
S3-compatible object store for backups, Velero snapshots, shared blob data
CDN sits in front of your ingress controller for global delivery
On-the-fly image transforms for any pod-served media
Anycast DNS for `*.cluster.example.com` (pair with ExternalDNS)
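With ExternalDNS installed and configured for your DNS provider, publishing a record for a workload is a single annotation. A sketch, assuming the Service already has an external address for ExternalDNS to point the record at (names are placeholders):

# Sketch: ExternalDNS picks up the hostname annotation and creates the DNS record
$ kubectl annotate service my-app \
    external-dns.alpha.kubernetes.io/hostname=my-app.cluster.example.com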
Indicative cost
A modest production cluster
3 control-plane + 3 worker nodes, 200GB persistent storage, ~2TB monthly egress
Indicative figures. Egress alone often dwarfs the control-plane fee on hyperscalers.
Common questions
Which distribution should I use?
For most teams, k3s. It's a single binary, runs in <512MB RAM per node, and supports almost all of vanilla Kubernetes. Use RKE2 if you need FIPS / CIS hardening, kubeadm if you have specific reasons to pin to upstream k8s.
Do I really need Kubernetes?
Probably not — if you're running fewer than ~20 containers, our Docker stack page (compose on a VM) will save you a lot of complexity. Kubernetes earns its keep with multi-region, fault tolerance, autoscaling, or large container counts.
How does this compare to EKS / GKE / AKS?
Cheaper (no control-plane fee, no per-LB-hour bill, zero egress fees), more portable (vanilla k8s, no cloud-specific CRDs), and you keep root on every node. Trade-off: you handle upgrades — or our Expert Services team can manage them.
Persistent volumes — local or S3?
Local NVMe for databases and anything IOPS-sensitive. S3-compatible CSI (e.g. `s3-csi`, `geesefs`) for shared assets and bulk data. Velero on top of Edge Storage handles cluster-wide backup and disaster recovery.
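For the backup half, a hedged sketch of pointing Velero at an S3-compatible bucket on Edge Storage (bucket name, credentials file, endpoint and plugin version are placeholders):

# Sketch: Velero with the AWS object-store plugin against an S3-compatible endpoint
$ velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.9.0 \
    --bucket cluster-backups \
    --secret-file ./edge-storage-credentials \
    --backup-location-config region=default,s3ForcePathStyle="true",s3Url=https://<edge-storage-endpoint>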
By Stack
Other stacks on Edge
Run Kubernetes the cheap way
30-day trial. Stand up a cluster in minutes — or have our Expert Services team architect and run it for you.