Multi-Tenancy in Kubernetes: Namespaces, Resource Quotas, and HNC
Managing a single Kubernetes cluster for a single team is relatively simple. The real challenge begins when you have to share the same cluster among 10, 20, or 50 different teams, each with its own resource, security, and isolation requirements. Or when you offer Kubernetes-as-a-service to your customers and need to guarantee that team A cannot see or interfere with team B's workloads, cannot consume more resources than its budget allows, and cannot bypass company policies.
Kubernetes offers the namespace as its basic isolation unit, but namespaces alone are not sufficient for real multi-tenancy. In this article we will see how to build a complete multi-tenancy system: ResourceQuota to limit resources per namespace, LimitRange to set defaults and maximums per container, NetworkPolicy to isolate traffic, and the Hierarchical Namespace Controller (HNC) to manage complex organizational structures with policy inheritance.
What You Will Learn
- Multi-tenancy models: namespace-per-tenant vs cluster-per-tenant
- ResourceQuota: Limits on CPU, memory, storage, and number of objects per namespace
- LimitRange: defaults and maximums for containers, prevention of Pods without limits
- NetworkPolicy for cross-namespace isolation
- RBAC for tenants: each team only sees its own namespace
- HNC (Hierarchical Namespace Controller): inheritance of policies and RBAC across sub-namespaces
- vcluster: full virtual clusters for strong isolation
- Patterns for automated tenant onboarding
Multi-Tenancy Models
Before implementing, you need to choose the right model based on the level of isolation you require:
| Model | Isolation | Operational Overhead | Cost | Use Case |
|---|---|---|---|---|
| Namespace per team | Logical (RBAC, quotas) | Low | Minimal | Internal teams with mutual trust |
| Namespace per customer (soft multi-tenancy) | Logical + network | Medium | Low | SaaS with basic isolation |
| Virtual cluster (vcluster) | Strong (separate API server) | High | Medium | Enterprise customers, CI/CD isolation |
| Separate cluster per tenant | Complete | Very high | High | Regulated workloads, sensitive data |
ResourceQuota: Limits for Namespaces
A ResourceQuota defines aggregate limits for all resources in a namespace. When a namespace has a quota, every resource creation request is validated against the remaining quota and rejected if it would exceed it.
Full ResourceQuota for a Team
```yaml
# resource-quota-team-alpha.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-alpha-quota
  namespace: team-alpha
spec:
  hard:
    # Compute resources
    requests.cpu: "20"              # max 20 vCPU requested in the namespace
    limits.cpu: "40"                # max 40 vCPU in limits
    requests.memory: "40Gi"         # max 40 GiB RAM requested
    limits.memory: "80Gi"           # max 80 GiB RAM in limits
    # Storage
    requests.storage: "500Gi"       # max 500 GiB total storage
    persistentvolumeclaims: "20"    # max 20 PVCs
    # Per specific StorageClass
    standard.storageclass.storage.k8s.io/requests.storage: "200Gi"
    ssd.storageclass.storage.k8s.io/requests.storage: "100Gi"
    # Kubernetes objects
    pods: "100"                     # max 100 Pods
    services: "20"
    secrets: "50"
    configmaps: "50"
    replicationcontrollers: "20"
    services.nodeports: "0"         # no NodePorts (we use Ingress)
    services.loadbalancers: "2"     # max 2 LoadBalancers
    # GPUs (if applicable)
    requests.nvidia.com/gpu: "4"    # max 4 GPUs
```
```bash
# Check quota usage
kubectl describe resourcequota team-alpha-quota -n team-alpha
# Output:
# Name:            team-alpha-quota
# Resource         Used    Hard
# --------         ----    ----
# limits.cpu       8500m   40
# limits.memory    12Gi    80Gi
# pods             35      100
# requests.cpu     4200m   20
```
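Comparing Used against Hard by eye is error-prone because CPU quantities mix plain cores ("40") with millicores ("8500m"). A small helper can normalize them before computing utilization; a minimal sketch (function names are illustrative):

```bash
#!/bin/sh
# Normalize a Kubernetes CPU quantity to millicores: "8500m" -> 8500, "40" -> 40000
to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;
    *)  echo "$(( $1 * 1000 ))" ;;
  esac
}

# Percentage of quota consumed, e.g. used=8500m hard=40 -> 21
quota_pct() {
  used=$(to_millicores "$1")
  hard=$(to_millicores "$2")
  echo $(( used * 100 / hard ))
}

quota_pct 8500m 40   # prints 21
```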
ResourceQuota per Class of Service
```yaml
# resource-quota-by-priority.yaml
# Separate quotas per priority class
# Use PriorityClass to implement QoS tiers
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000
globalDefault: false
description: "Critical workloads, not subject to eviction"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 100
globalDefault: true
description: "Batch workloads, may be evicted"
---
# Separate quota for high-priority workloads
apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota
  namespace: team-alpha
spec:
  hard:
    pods: "10"
    requests.cpu: "8"
    requests.memory: "16Gi"
  scopeSelector:
    matchExpressions:
    - scopeName: PriorityClass
      operator: In
      values: ["high-priority"]
```
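With the scoped quota above in place, a Pod opts into the high-priority budget simply by naming the PriorityClass; a minimal sketch (the image and workload names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
  namespace: team-alpha
spec:
  priorityClassName: high-priority  # counted against high-priority-quota
  containers:
  - name: api
    image: registry.example.com/payments-api:1.4.2
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
```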
LimitRange: Defaults and Maximums for Containers
ResourceQuota operates at the namespace level, but it needs every Pod to declare resources.requests in order to account for usage; in fact, once a quota constrains requests.cpu or requests.memory, Pods that omit those requests are rejected outright. LimitRange solves this: it sets default requests and limits for each container, and defines the minimum and maximum values allowed.
```yaml
# limitrange-team-alpha.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: team-alpha-limits
  namespace: team-alpha
spec:
  limits:
  # Per-container defaults (applied when not specified)
  - type: Container
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
    # Allowed maximums and minimums
    max:
      cpu: "4"
      memory: "8Gi"
    min:
      cpu: "10m"
      memory: "32Mi"
    # Max limit/request ratio to prevent excessive bursting
    maxLimitRequestRatio:
      cpu: "10"      # limit at most 10x the request
      memory: "4"    # limit at most 4x the request
  # Per-Pod limits (sum of all containers)
  - type: Pod
    max:
      cpu: "8"
      memory: "16Gi"
  # Per-PVC limits
  - type: PersistentVolumeClaim
    max:
      storage: "50Gi"
    min:
      storage: "1Gi"
```
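With this LimitRange in place, a container submitted without a resources stanza gets one injected at admission time; the persisted container spec ends up looking roughly like this:

```yaml
# Injected by the LimitRange admission controller when the author omitted resources
resources:
  requests:
    cpu: "100m"      # from defaultRequest
    memory: "128Mi"
  limits:
    cpu: "500m"      # from default
    memory: "512Mi"
```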
RBAC for Multi-Tenancy
Each tenant should only be able to see and modify its own namespace. Let's create a reproducible RBAC pattern that can be automated for each new team:
```yaml
# onboarding-team-alpha.yaml
# Onboarding manifest: namespace + quota + LimitRange + RBAC
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha
  labels:
    team: alpha
    env: production
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
---
# User group: all developers on team alpha
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-developers
  namespace: team-alpha
subjects:
- kind: Group
  name: "team-alpha"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit  # built-in ClusterRole: edit allows everything except RBAC and quotas
  apiGroup: rbac.authorization.k8s.io
---
# Tech lead: can also manage RBAC in the namespace (but is not cluster-admin)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-lead
  namespace: team-alpha
subjects:
- kind: User
  name: "alice@company.com"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin  # built-in ClusterRole: admin = edit + RBAC management within the namespace
  apiGroup: rbac.authorization.k8s.io
```
```bash
# Verify that the team cannot access other namespaces
kubectl auth can-i get pods --as=developer --as-group=team-alpha -n team-beta
# no
kubectl auth can-i get pods --as=developer --as-group=team-alpha -n team-alpha
# yes
```
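The manifests above can be templated so that onboarding a new team becomes a one-liner. A minimal sketch (the script and label names are illustrative, not a published tool) that prints manifests ready to pipe into `kubectl apply -f -`:

```bash
#!/bin/sh
# onboard_team <name>: emit namespace + RoleBinding manifests for a new team
onboard_team() {
  team="$1"
  cat <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: team-${team}
  labels:
    team: ${team}
    pod-security.kubernetes.io/enforce: restricted
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-${team}-developers
  namespace: team-${team}
subjects:
- kind: Group
  name: "team-${team}"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
EOF
}

# Usage: onboard_team gamma | kubectl apply -f -
onboard_team gamma
```

The same pattern extends naturally to the ResourceQuota and LimitRange manifests shown earlier.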
NetworkPolicy for Cross-Namespace Isolation
```yaml
# networkpolicy-tenant-isolation.yaml
# Fully isolates the tenant namespace:
# only intra-namespace traffic + DNS + monitoring + shared services are allowed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation
  namespace: team-alpha
spec:
  podSelector: {}  # all Pods in the namespace
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Traffic only from Pods in the same namespace
  - from:
    - podSelector: {}
  # Allow the ingress controller (ingress-nginx namespace)
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
  egress:
  # Traffic only to Pods in the same namespace
  - to:
    - podSelector: {}
  # DNS
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # Monitoring: allow egress to the monitoring namespace
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
  # Access to shared services (e.g., shared database, internal APIs)
  - to:
    - namespaceSelector:
        matchLabels:
          shared-service: "true"
    ports:
    - protocol: TCP
      port: 5432  # shared PostgreSQL
    - protocol: TCP
      port: 6379  # shared Redis
```
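A policy like the one above is usually layered on top of an explicit default-deny baseline, so that nothing is reachable until an allow rule says otherwise; a minimal sketch:

```yaml
# Default-deny: selects every Pod but lists no ingress/egress rules,
# so all traffic in both directions is blocked until other policies allow it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-alpha
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```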
Hierarchical Namespace Controller (HNC)
HNC allows you to create namespace hierarchies with resource inheritance. It is ideal for complex organizational structures: a "team-alpha" namespace with sub-namespaces "team-alpha-dev", "team-alpha-staging", and "team-alpha-prod" that automatically inherit RBAC bindings, NetworkPolicies, and LimitRanges from the parent (ResourceQuota, as we'll see, stays per-namespace).
HNC installation
```bash
# Install HNC with kubectl
kubectl apply -f https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/v1.1.0/default.yaml

# Or with Helm
helm repo add hnc https://kubernetes-sigs.github.io/hierarchical-namespaces/
helm install hnc hnc/hnc \
  --namespace hnc-system \
  --create-namespace \
  --version 1.1.0

# Install the kubectl plugin for HNC
curl -L https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/v1.1.0/kubectl-hns_linux_amd64 \
  -o /usr/local/bin/kubectl-hns
chmod +x /usr/local/bin/kubectl-hns

# Verify
kubectl hns version
```
Namespace Hierarchy for a Team
```bash
# Build the hierarchy: team-alpha is the root namespace,
# team-alpha-dev, team-alpha-staging and team-alpha-prod are subnamespaces

# Create the root namespace
kubectl create namespace team-alpha

# Create the subnamespaces (managed by HNC)
kubectl hns create team-alpha-dev -n team-alpha
kubectl hns create team-alpha-staging -n team-alpha
kubectl hns create team-alpha-prod -n team-alpha

# Display the hierarchy
kubectl hns tree team-alpha
# Output:
# team-alpha
# ├── team-alpha-dev
# ├── team-alpha-staging
# └── team-alpha-prod

# Configure what gets propagated from parents to children
kubectl hns config set-resource networkpolicies --mode Propagate
kubectl hns config set-resource rolebindings --mode Propagate
kubectl hns config set-resource limitranges --mode Propagate
# ResourceQuota is NOT propagated (each subnamespace keeps its own)
```
Automatic Resource Propagation
```yaml
# In the parent namespace team-alpha:
# a RoleBinding here is automatically propagated to every subnamespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-developers
  namespace: team-alpha  # automatically propagated to all children
  annotations:
    propagate.hnc.x-k8s.io/select: "true"
subjects:
- kind: Group
  name: "team-alpha-devs"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view  # view in every subnamespace
  apiGroup: rbac.authorization.k8s.io
```
```yaml
# Prod-specific RoleBinding: tech lead only.
# It does not propagate to children because it lives in team-alpha-prod (a leaf)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prod-admin
  namespace: team-alpha-prod
subjects:
- kind: User
  name: "alice@company.com"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```
```bash
# Verify propagation
kubectl get rolebindings -n team-alpha-dev | grep team-alpha-developers
# The RoleBinding appears in the subnamespace via propagation
```
Virtual Cluster with vcluster
For stronger isolation, vcluster creates full Kubernetes clusters (each with its own API server, scheduler, and controller manager) that run inside a namespace of the host cluster. The tenant has full access to its vcluster but cannot see the host cluster.
```bash
# Install the vcluster CLI
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
chmod +x vcluster
sudo mv vcluster /usr/local/bin

# Create a virtual cluster for team-beta
vcluster create team-beta-cluster \
  --namespace vcluster-team-beta \
  --connect=false \
  --helm-values vcluster-values.yaml
```

```yaml
# vcluster-values.yaml
vcluster:
  image: ghcr.io/loft-sh/vcluster-k8s:1.30
  resources:
    limits:
      cpu: "2"
      memory: "2Gi"
sync:
  nodes:
    enabled: true
    syncAllNodes: false  # sync only the nodes where the vcluster's Pods run
```

```bash
# Connect to the vcluster
vcluster connect team-beta-cluster --namespace vcluster-team-beta -- kubectl get nodes
```
Tenant Onboarding Automation
In clusters with many teams, manual onboarding doesn't scale. A common pattern is to use a dedicated operator (or a simple script or GitOps pipeline) that automatically creates all the resources needed for a new tenant, starting from a Tenant CRD:
```bash
# Capsule is a CNCF operator that automates tenant creation
helm repo add clastix https://clastix.github.io/charts
helm install capsule clastix/capsule \
  --namespace capsule-system \
  --create-namespace
```

```yaml
# tenant.yaml - define a tenant with Capsule
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: team-alpha
spec:
  owners:
  - name: alice@company.com
    kind: User
  - name: team-alpha-admins
    kind: Group
  namespaceOptions:
    quota: 10  # max 10 namespaces for this tenant
    forbiddenLabels:
      denied: ["environment=production"]  # the tenant cannot create namespaces with certain labels
  resourceQuotas:
    scope: Tenant  # quota aggregated across all of the tenant's namespaces
    items:
    - hard:
        requests.cpu: "50"
        requests.memory: "100Gi"
        requests.storage: "1Ti"
  limitRanges:
    items:
    - limits:
      - type: Container
        default:
          cpu: "500m"
          memory: "512Mi"
        defaultRequest:
          cpu: "100m"
          memory: "128Mi"
  networkPolicies:
    items:
    - podSelector: {}
      policyTypes:
      - Ingress
      - Egress
      ingress:
      - from:
        - podSelector: {}
  nodeSelector:
    kubernetes.io/os: linux
    # It is also possible to pin the tenant to dedicated nodes:
    # tenant: team-alpha
```
Quota Monitoring
```bash
# See quota usage across all namespaces
kubectl get resourcequota -A -o custom-columns=\
"NAMESPACE:.metadata.namespace,NAME:.metadata.name,\
CPU-REQ:.status.used.requests\.cpu,CPU-LIM:.status.used.limits\.cpu,\
MEM-REQ:.status.used.requests\.memory"
```
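That tabular output can be post-processed in scripts; a small sketch (the sample rows are embedded for illustration and mirror the command above) that flags namespaces requesting more than 10 vCPUs:

```bash
#!/bin/sh
# Sample rows mirroring the custom-columns output above
sample='NAMESPACE   NAME              CPU-REQ   CPU-LIM   MEM-REQ
team-alpha  team-alpha-quota  4200m     8500m     12Gi
team-beta   team-beta-quota   18        36        70Gi'

# Normalize CPU-REQ to millicores and print namespaces above 10000m (10 vCPU)
over=$(echo "$sample" | awk 'NR > 1 {
  v = $3
  if (v ~ /m$/) { sub(/m$/, "", v); mc = v + 0 } else { mc = v * 1000 }
  if (mc > 10000) print $1
}')
echo "$over"
```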
```yaml
# Prometheus alert when a quota nears its limit
# Add this PrometheusRule:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: namespace-quota-alerts
  namespace: monitoring
spec:
  groups:
  - name: quota
    rules:
    - alert: NamespaceCPUQuotaExceeding80Percent
      expr: |
        kube_resourcequota{resource="requests.cpu", type="used"}
          / ignoring(type)
        kube_resourcequota{resource="requests.cpu", type="hard"} > 0.8
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "Namespace {{ $labels.namespace }} at {{ $value | humanizePercentage }} of its CPU quota"
    - alert: NamespaceMemoryQuotaExceeding90Percent
      expr: |
        kube_resourcequota{resource="requests.memory", type="used"}
          / ignoring(type)
        kube_resourcequota{resource="requests.memory", type="hard"} > 0.9
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "Namespace {{ $labels.namespace }} at 90% of its memory quota"
```
Best Practices for Multi-Tenancy
Production Multi-Tenancy Checklist
- One namespace per team environment: team-alpha-dev, team-alpha-staging, team-alpha-prod; avoid mixing environments in the same namespace
- ResourceQuota always defined: without a quota, one team can consume the entire cluster's resources and impact other tenants
- LimitRange for defaults: prevents Pods without resource requests, which would break ResourceQuota accounting and hurt cluster scheduling
- NetworkPolicy default-deny: every namespace should have a policy that blocks all unauthorized cross-namespace traffic
- Pod Security Standards restricted: apply the restricted level to all tenant namespaces (see article 6)
- RBAC with least privilege: developers get the built-in edit ClusterRole, not admin or cluster-admin
- Access auditing: enable the audit log to track who accesses what in each namespace
- Automate onboarding: use Capsule, a custom operator, or a GitOps manifest to create all tenant objects consistently
Pitfalls of Kubernetes Multi-Tenancy
- A namespace is not complete isolation: some cluster-scoped objects (ClusterRole, PersistentVolume, Node) are visible to all tenants; use vcluster if you need complete isolation
- Shared kernel vulnerabilities: all tenants share the same Linux kernel; a container that exploits a kernel vulnerability can impact other tenants; evaluate dedicated nodes for tenants with sensitive data
- DNS leakage: by default, Pods can resolve Service names in other namespaces (service.namespace.svc.cluster.local); use NetworkPolicy to block cross-namespace DNS traffic as well if necessary
- ResourceQuota without LimitRange: without a LimitRange, Pods that omit resource requests are either rejected outright or escape quota accounting, so the quota no longer reflects real usage
Conclusions and Next Steps
Multi-tenancy in Kubernetes is a continuum between operational simplicity and strong isolation. For most business cases with internal teams, namespaces with ResourceQuota, appropriate LimitRange, NetworkPolicy, and RBAC provide sufficient isolation with minimal operational overhead. For enterprise customers or scenarios with stringent compliance requirements, vcluster offers a much stronger level of isolation while retaining the resource-efficiency benefits of a shared cluster.
HNC turns namespace-hierarchy management into something scalable: instead of manually replicating RBAC and NetworkPolicy in each sub-namespace, you define the policies once in the parent namespace and they propagate automatically. For platforms with dozens of teams, this makes the difference between a maintainable system and one that requires a dedicated team just to manage namespace configuration.
Related Series
- Kubernetes security — RBAC and Pod Security Standards, prerequisites for multi-tenancy
- Platform Engineering — multi-tenancy as the foundation of Internal Developer Platforms
- FinOps for Kubernetes — cost management for namespaces and tenants