Cloud Native
Running Linkerd on OpenShift: SCCs, CNI, and the Edges That Bite
A practitioner's guide to installing Linkerd on OpenShift: why the default install fails, how Security Context Constraints change the picture, and which of the two fixes is actually worth operating.
Todea Engineering
Cloud Native Practice
Installing Linkerd on a vanilla Kubernetes cluster is a ten-minute job. Installing it on OpenShift is not. The Helm chart is the same, the control plane is the same, the proxies are the same, and yet the first helm install you run against OpenShift will leave you with a linkerd namespace full of deployments that never scale past zero:
oc get deployment -n linkerd
NAME READY UP-TO-DATE AVAILABLE AGE
linkerd-destination 0/1 0 0 2m40s
linkerd-identity 0/1 0 0 2m40s
linkerd-proxy-injector 0/1 0 0 2m40s

None of that is a bug. OpenShift's security model is doing exactly what it was designed to do. The trick is knowing which parts of Linkerd collide with it, and which of the available fixes you actually want to live with.
Why the vanilla install fails
OpenShift gates every pod through Security Context Constraints (SCCs). An SCC is a cluster-scoped policy that decides which securityContext settings a pod is allowed to use: which UIDs it can run as, which Linux capabilities it can request, which volume types it can mount, whether it can access the host network.
oc get scc
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny 10 false ["configMap","csi","downwardAPI","emptyDir","ephemeral","persistentVolumeClaim","projected","secret"]
hostaccess false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","csi","downwardAPI","emptyDir","ephemeral","hostPath","persistentVolumeClaim","projected","secret"]
hostmount-anyuid false <no value> MustRunAs RunAsAny RunAsAny RunAsAny <no value> false ["configMap","csi","downwardAPI","emptyDir","ephemeral","hostPath","nfs","persistentVolumeClaim","projected","secret"]
hostmount-anyuid-v2 false <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["configMap","csi","downwardAPI","emptyDir","ephemeral","hostPath","nfs","persistentVolumeClaim","projected","secret"]
hostnetwork false <no value> MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","csi","downwardAPI","emptyDir","ephemeral","persistentVolumeClaim","projected","secret"]
hostnetwork-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs MustRunAs <no value> false ["configMap","csi","downwardAPI","emptyDir","ephemeral","persistentVolumeClaim","projected","secret"]
insights-runtime-extractor-scc true ["CAP_SYS_ADMIN"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"]
machine-api-termination-handler false <no value> MustRunAs RunAsAny MustRunAs MustRunAs <no value> false ["downwardAPI","hostPath"]
node-exporter true <no value> RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"]
nonroot false <no value> MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","csi","downwardAPI","emptyDir","ephemeral","persistentVolumeClaim","projected","secret"]
nonroot-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <no value> false ["configMap","csi","downwardAPI","emptyDir","ephemeral","persistentVolumeClaim","projected","secret"]
privileged true ["*"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"]
privileged-genevalogging true ["*"] RunAsAny RunAsAny RunAsAny RunAsAny <no value> false ["*"]
restricted false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","csi","downwardAPI","emptyDir","ephemeral","persistentVolumeClaim","projected","secret"]
restricted-v2 false ["NET_BIND_SERVICE"] MustRunAs MustRunAsRange MustRunAs RunAsAny <no value> false ["configMap","csi","downwardAPI","emptyDir","ephemeral","persistentVolumeClaim","projected","secret"]

At admission, the plugin enumerates every SCC on the cluster, sorts them by priority, and for each one checks two things: whether the pod's ServiceAccount has RBAC access to it, and whether the pod spec validates against its rules. The pod is admitted against the first SCC that passes both. If none do, it is rejected, and the event log lists every SCC it tried and why each one said no.
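For a pod that does get admitted, the plugin records the winning SCC in an annotation on the pod, which makes it easy to confirm what a healthy workload is actually running under. A quick sketch, assuming jq is available; the label selector is the one Linkerd puts on its control-plane pods:

```shell
# Read the openshift.io/scc annotation the admission plugin stamps on
# every admitted pod (label selector is illustrative).
oc get pod -n linkerd -l linkerd.io/control-plane-component=destination -o json \
  | jq -r '.items[0].metadata.annotations["openshift.io/scc"]'
```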
When the pod is rejected, the event log spells out exactly what the plugin tried: one line per SCC, with either Forbidden: not usable by user or serviceaccount (failed the RBAC check) or an Invalid value: … detail (passed RBAC, failed validation). Reading that list is how you debug.
oc get events -n linkerd
LAST SEEN TYPE REASON OBJECT MESSAGE
7m47s Warning FailedCreate replicaset/linkerd-destination-5fd5f7b7f7 Error creating: pods "linkerd-destination-5fd5f7b7f7-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .initContainers[0].runAsUser: Invalid value: 65534: must be in the ranges: [1000750000, 1000759999], provider restricted-v2: .initContainers[0].capabilities.add: Invalid value: "NET_ADMIN": capability may not be added, provider restricted-v2: .initContainers[0].capabilities.add: Invalid value: "NET_RAW": capability may not be added, provider restricted-v2: .containers[0].runAsUser: Invalid value: 2102: must be in the ranges: [1000750000, 1000759999], provider restricted-v2: .containers[1].runAsUser: Invalid value: 2103: must be in the ranges: [1000750000, 1000759999], provider restricted-v2: .containers[2].runAsUser: Invalid value: 2103: must be in the ranges: [1000750000, 1000759999], provider restricted-v2: .containers[3].runAsUser: Invalid value: 2103: must be in the ranges: [1000750000, 1000759999], provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid-v2": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "insights-runtime-extractor-scc": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not 
usable by user or serviceaccount, provider "privileged-genevalogging": Forbidden: not usable by user or serviceaccount]
106s Warning FailedCreate job/linkerd-heartbeat-29446087 Error creating: pods "linkerd-heartbeat-29446087-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .containers[0].runAsUser: Invalid value: 2103: must be in the ranges: [1000750000, 1000759999], provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid-v2": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "insights-runtime-extractor-scc": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount, provider "privileged-genevalogging": Forbidden: not usable by user or serviceaccount]
7m47s Warning FailedCreate replicaset/linkerd-identity-688fff88b4 Error creating: pods "linkerd-identity-688fff88b4-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .initContainers[0].runAsUser: Invalid value: 65534: must be in the ranges: [1000750000, 1000759999], provider restricted-v2: .initContainers[0].capabilities.add: Invalid value: "NET_ADMIN": capability may not be added, provider restricted-v2: .initContainers[0].capabilities.add: Invalid value: "NET_RAW": capability may not be added, provider restricted-v2: .containers[0].runAsUser: Invalid value: 2103: must be in the ranges: [1000750000, 1000759999], provider restricted-v2: .containers[1].runAsUser: Invalid value: 2102: must be in the ranges: [1000750000, 1000759999], provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid-v2": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "insights-runtime-extractor-scc": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount, provider "privileged-genevalogging": Forbidden: not usable by user or serviceaccount]
7m47s Warning FailedCreate replicaset/linkerd-proxy-injector-5f654db4db Error creating: pods "linkerd-proxy-injector-5f654db4db-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .initContainers[0].runAsUser: Invalid value: 65534: must be in the ranges: [1000750000, 1000759999], provider restricted-v2: .initContainers[0].capabilities.add: Invalid value: "NET_ADMIN": capability may not be added, provider restricted-v2: .initContainers[0].capabilities.add: Invalid value: "NET_RAW": capability may not be added, provider restricted-v2: .containers[0].runAsUser: Invalid value: 2102: must be in the ranges: [1000750000, 1000759999], provider restricted-v2: .containers[1].runAsUser: Invalid value: 2103: must be in the ranges: [1000750000, 1000759999], provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid-v2": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "insights-runtime-extractor-scc": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount, provider "privileged-genevalogging": Forbidden: not usable by user or serviceaccount]

Linkerd installs without any custom SCC bindings, so its ServiceAccounts inherit only the cluster-wide RBAC grant that every authenticated SA gets on OpenShift 4.11+: access to restricted-v2, and nothing else. The event log confirms this. Every other provider returns Forbidden: not usable by user or serviceaccount. Linkerd's pods are validated against restricted-v2 alone, and they fail.
As its name suggests, restricted-v2 is strict:
- No added Linux capabilities.
- Pods must run as a non-root UID within the project's assigned UID range. Each OpenShift project gets its own range via the openshift.io/sa.scc.uid-range namespace annotation, and pods must fall inside it.
- No host paths, no host network, no privilege escalation.
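To see the window a particular project was assigned, read that annotation directly. A small sketch that also derives the inclusive end of the range; the namespace name is illustrative:

```shell
# The annotation is "startUID/size"; compute the inclusive [start, end]
# window for the namespace (namespace name is illustrative).
range=$(oc get namespace linkerd \
  -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.uid-range}')
start=${range%/*}
size=${range#*/}
echo "UIDs must fall in [$start, $((start + size - 1))]"
```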
The Linkerd pods violate this in two ways. The first hits every container in the install; the second only applies to linkerd-init:
- The UIDs don't fit the project range.
restricted-v2 requires every UID to fall inside the project's assigned range: [1000750000, 1000759999] in this example, a different window in every project. linkerd-init runs as 65534 (nobody), and the control-plane containers and proxy sidecar default to 2102 and 2103. None of those are in range, and chasing them through Helm isn't a fix: ranges are reassigned whenever a namespace is recreated, and meshed application pods live in other projects with ranges of their own. No single UID works everywhere.
- linkerd-init needs capabilities restricted-v2 won't grant. Its whole job is rewriting iptables so the pod's traffic is redirected through the sidecar, which takes NET_ADMIN and NET_RAW. Neither is on the allow-list.
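The UID mismatch is easy to sanity-check with a line of arithmetic. A throwaway sketch using the [1000750000, 1000759999] window from the event log above:

```shell
# Check Linkerd's default UIDs against the example project range taken
# from the event log; every one of them falls outside the window.
in_range() {
  uid=$1; start=1000750000; end=1000759999
  if [ "$uid" -ge "$start" ] && [ "$uid" -le "$end" ]; then
    echo "$uid: in range"
  else
    echo "$uid: out of range"
  fi
}
in_range 65534   # linkerd-init (nobody)
in_range 2102    # proxy sidecar
in_range 2103    # control-plane containers
```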
Two ways out
There are two supported paths. They solve the same problem from opposite ends, and the choice has real operational consequences.
Path 1: Linkerd CNI (recommended)
The Linkerd CNI plugin moves the iptables setup out of the pod and into a node-level CNI chain. The linkerd-cni DaemonSet drops a binary and a config into the node's CNI directory, and Multus chains it into every pod's CNI setup. The plugin is a no-op for pods that aren't meshed. The pod itself no longer needs NET_ADMIN or NET_RAW.
The CNI DaemonSet still does privileged work on the node, so its ServiceAccount needs an SCC that allows it:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
name: linkerd-cni-scc
allowPrivilegedContainer: true
allowPrivilegeEscalation: true
defaultAllowPrivilegeEscalation: true
allowHostNetwork: false
allowHostPorts: false
allowHostPID: false
allowHostIPC: false
allowHostDirVolumePlugin: true
volumes:
- hostPath
- configMap
- projected
- downwardAPI
- emptyDir
seccompProfiles:
- '*'
runAsUser:
type: RunAsAny
seLinuxContext:
type: RunAsAny
fsGroup:
type: RunAsAny
supplementalGroups:
type: RunAsAny
users:
- system:serviceaccount:linkerd-cni:linkerd-cni

One more OpenShift-specific detail: the CNI paths are not the upstream Kubernetes defaults. OpenShift puts CNI binaries under /var/lib/cni/bin and configs under /etc/kubernetes/cni/net.d, and Linkerd CNI has to drop its files in the right place or Multus will never see it. Set the Helm values accordingly at install time:
helm install linkerd2-cni linkerd2-edge/linkerd2-cni \
--namespace linkerd-cni \
--set destCNIBinDir=/var/lib/cni/bin \
--set destCNINetDir=/etc/kubernetes/cni/net.d \
--set privileged=true

Once the DaemonSet is healthy, install the control plane with --set cniEnabled=true. The linkerd-init container is then omitted from every meshed pod, and the proxy sidecar runs without elevated capabilities.
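A sketch of the corresponding control-plane install. The chart and repo names assume the standard Linkerd Helm repo layout, and the cert file names are placeholders for your own identity material:

```shell
# Record the CNI-mode switch in a values file, then install the control
# plane with it (identity cert paths below are placeholders).
cat > cni-values.yaml <<'EOF'
cniEnabled: true
EOF

helm install linkerd-control-plane linkerd/linkerd-control-plane \
  --namespace linkerd --create-namespace \
  --values cni-values.yaml \
  --set-file identityTrustAnchorsPEM=ca.crt \
  --set-file identity.issuer.tls.crtPEM=issuer.crt \
  --set-file identity.issuer.tls.keyPEM=issuer.key
```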
CNI shifts the privileged work from pods to the node, but it doesn't solve the UID problem for the control plane itself. Bind the Linkerd control plane ServiceAccounts to a minimal, non-privileged SCC:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
name: linkerd-scc
allowPrivilegedContainer: false
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
allowHostNetwork: false
allowHostPorts: false
allowHostPID: false
allowHostIPC: false
allowHostDirVolumePlugin: false
volumes:
- configMap
- projected
- downwardAPI
- emptyDir
- secret
seccompProfiles:
- '*'
runAsUser:
type: MustRunAsNonRoot
seLinuxContext:
type: RunAsAny
fsGroup:
type: RunAsAny
supplementalGroups:
type: RunAsAny
users:
- system:serviceaccount:linkerd:linkerd-destination
- system:serviceaccount:linkerd:linkerd-identity
- system:serviceaccount:linkerd:linkerd-proxy-injector
- system:serviceaccount:linkerd:linkerd-heartbeat

Meshed application workloads also carry a linkerd-proxy sidecar running as UID 2102, so their ServiceAccounts need the same grant. Add each one to the users: list, or run oc adm policy add-scc-to-user linkerd-scc -z <sa> -n <namespace>.
Path 2: Custom SCC for proxy-init
If you prefer to keep linkerd-init, deploy the following custom SCC:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
name: linkerd-scc
allowPrivilegedContainer: false
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
allowHostNetwork: false
allowHostPorts: false
allowHostPID: false
allowHostIPC: false
allowHostDirVolumePlugin: false
requiredDropCapabilities:
- ALL
allowedCapabilities:
- NET_ADMIN
- NET_RAW
volumes:
- configMap
- projected
- downwardAPI
- emptyDir
- secret
seccompProfiles:
- '*'
runAsUser:
type: MustRunAsNonRoot
seLinuxContext:
type: RunAsAny
fsGroup:
type: RunAsAny
supplementalGroups:
type: RunAsAny
users:
- system:serviceaccount:linkerd:linkerd-destination
- system:serviceaccount:linkerd:linkerd-identity
- system:serviceaccount:linkerd:linkerd-proxy-injector
- system:serviceaccount:linkerd:linkerd-heartbeat

The difference from the previous path is that this SCC grants NET_ADMIN and NET_RAW to the control plane ServiceAccounts. The grant applies only to the users listed in the SCC, so you'll need to add every new application ServiceAccount as you onboard it.
This looks contained, but its blast radius is bigger than it appears. linkerd-init runs inside every meshed pod, not just the control-plane ones, so every meshed namespace needs those capabilities available to its ServiceAccounts. In practice you end up binding this SCC (or a sibling) to system:serviceaccounts:<app-namespace> for every application namespace you onboard. That operational tax never really goes away.
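If you do take this path, granting per namespace via the ServiceAccount group keeps the SCC's users: list from growing without bound. A sketch of the provisioning step; the namespace names are illustrative:

```shell
# Grant the SCC to every ServiceAccount in each meshed app namespace by
# binding it to the namespace's SA group rather than individual SAs.
for ns in payments checkout inventory; do
  oc adm policy add-scc-to-group linkerd-scc "system:serviceaccounts:$ns"
done
```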
One more trap: the policy controller leases
Whichever path you pick, there's one more OpenShift-specific detail that catches people out. OpenShift enables the OwnerReferencesPermissionEnforcement admission plugin by default, which isn't the case on a stock Kubernetes cluster. That plugin requires anyone setting an ownerReference on an object to also hold delete permission on it, so that garbage collection can later remove it. When the policy controller tries to claim the policy-controller-write lease in leases.coordination.k8s.io and attach an ownerRef, the call fails: the default linkerd-policy ClusterRole grants neither update nor delete on leases. Patch in the missing verbs:
oc get clusterrole linkerd-policy -o json \
| jq '(.rules[] | select(.apiGroups==["coordination.k8s.io"] and .resources==["leases"]) | .verbs) |= (. + ["update","delete"] | unique)' \
| oc apply -f -

When the xtables modules aren't available
On some OpenShift clusters, the RHCOS kernel doesn't autoload every xtables compatibility module that linkerd-init relies on. When that happens, linkerd-init fails to insert its iptables rules and the control-plane components end up stuck in Init:CrashLoopBackOff.
oc logs -n linkerd deploy/linkerd-destination -c linkerd-init --previous
time="2025-12-27T04:59:58Z" level=info msg="/usr/sbin/iptables-nft-save -t nat"
time="2025-12-27T04:59:58Z" level=info msg="# Generated by iptables-nft-save v1.8.11 (nf_tables) on Sat Dec 27 04:59:58 2025\n*nat\n:PREROUTING ACCEPT [0:0]\n:INPUT ACCEPT [0:0]\n:OUTPUT ACCEPT [0:0]\n:POSTROUTING ACCEPT [0:0]\n:PROXY_INIT_REDIRECT - [0:0]\nCOMMIT\n# Completed on Sat Dec 27 04:59:58 2025\n"
time="2025-12-27T04:59:58Z" level=info msg="/usr/sbin/iptables-nft -t nat -F PROXY_INIT_REDIRECT"
time="2025-12-27T04:59:58Z" level=info msg="/usr/sbin/iptables-nft -t nat -A PROXY_INIT_REDIRECT -p tcp --match multiport --dports 4190,4191,4567,4568 -j RETURN -m comment --comment proxy-init/ignore-port-4190,4191,4567,4568"
time="2025-12-27T04:59:58Z" level=info msg="Warning: Extension multiport revision 0 not supported, missing kernel module?\niptables v1.8.11 (nf_tables): RULE_APPEND failed (No such file or directory): rule in chain PROXY_INIT_REDIRECT\n"
Error: exit status 4

iptables-nft isn't a pure nftables translator: matchers like -m multiport, -m owner, and -m comment, as well as the REDIRECT target, go through nft_compat, which needs the legacy xt_* modules to be loadable. If any of those modules isn't available, rule insertion fails. To make sure they're loaded at boot, use this MachineConfig:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: load-linkerd-xt-modules
labels:
machineconfiguration.openshift.io/role: worker
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- path: /etc/modules-load.d/linkerd-xt.conf
mode: 0644
overwrite: true
contents:
source: data:,xt_multiport%0Axt_comment%0Axt_REDIRECT%0Axt_owner

The data:, URL is the four module names separated by URL-encoded newlines, the format systemd-modules-load expects. Applying this drains and reboots every node in the worker MachineConfigPool.
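Once the pool has rolled, it's worth verifying that the modules actually loaded. A sketch using oc debug; the node name is illustrative:

```shell
# Check for the four xt_* modules on a worker node after the reboot.
oc debug node/worker-0 -- chroot /host \
  sh -c 'lsmod | grep -E "^xt_(multiport|comment|REDIRECT|owner)[[:space:]]"'
```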
Which one to pick
The real choice is about where the privilege lives and how many SCCs you have to maintain.
The CNI path concentrates the privilege in one place: a single DaemonSet in a dedicated linkerd-cni project, governed by one linkerd-cni-scc bound to one ServiceAccount. The control plane gets a minimal, non-privileged linkerd-scc. Application workloads pick up the same minimal SCC for their proxy sidecars, but nothing in the cluster ever needs NET_ADMIN or NET_RAW inside an application pod.
The proxy-init path spreads the privilege out: every meshed application namespace needs its ServiceAccounts bound to an SCC that grants NET_ADMIN and NET_RAW. You are operating that grant forever, for every new app team that onboards.
Pick CNI unless your cluster administrators forbid DaemonSets that write to host CNI paths. In that case, proxy-init with a custom SCC is the honest fallback. Just build the per-namespace SCC grant into your namespace-provisioning automation from day one, or it will bite you six months in.