By 2026, almost every production workload of consequence ships as a container and runs on Kubernetes, inheriting a security posture determined less by what the application code does than by how the image was built, signed, admitted, and observed at runtime. Container security is the discipline of treating that full lifecycle — base image, Dockerfile, registry, admission, runtime, network — as one continuous attack surface rather than five disconnected ones. This guide walks it from a developer's perspective: how the build inherits CVEs from base images, why image scanning alone never catches the runtime compromise, what Pod Security Standards enforce, how Sigstore changes the trust model for registries, and the operational tax that distroless images and signed-only policies impose on the teams that adopt them.
The Container Security Stack — Five Surfaces, Five Sets of Controls

The container security model that works in 2026 is layered, and the layers are not interchangeable. A program that scans images at build time but admits any image at the cluster boundary has the same effective posture as one that does neither — the build-time scan is advisory, and the cluster runs whatever it is told to. A program that runs runtime monitoring but never signs images cannot tell whether the suspicious binary in a pod was the artifact the team built or one substituted along the registry-to-node path. The five surfaces — build, registry, admission, runtime, observation — each have distinct controls that the others cannot substitute for.
Build is where the image is composed. Controls: Dockerfile hygiene (USER, multi-stage, no ADD-from-URL), image scanning (Trivy, Grype, Snyk Container, Docker Scout) against assembled layers, and SBOM generation as a build side effect. This is the layer with the most overlap with software composition analysis — both find CVEs in language-package dependencies; container scanners also see OS packages, configuration, and embedded secrets.
Registry is where the image lives between build and deployment. Controls: signing (Cosign), attestation (in-toto, SLSA provenance), retention policy, and access control. The registry is where supply-chain integrity becomes verifiable — a signed image pulled by a node and verified against the signing key produces a trust chain that "we trust Docker Hub" cannot.
Admission is the last gate before the image enters the cluster. Controls: admission webhooks (OPA Gatekeeper, Kyverno), Pod Security Standards (privileged, baseline, restricted), and image policy enforcement (Sigstore policy-controller, Connaisseur). An admission rule that rejects unsigned images is the single highest-leverage control in the stack — it converts every other layer's verification from advisory finding into precondition.
Runtime is where the container actually executes. Controls: syscall monitoring (Falco, Tetragon, Tracee), eBPF observability, and network policy (Cilium, Calico). Runtime catches what scanning cannot: a reverse shell spawned from a process never in the image, a crypto miner downloaded after start, lateral movement to a service the workload should not be talking to.
Observation is where logs, metrics, and traces aggregate into something an incident responder can query. Controls: audit logs, telemetry pipelines, and the workflow that turns alerts into investigations. Most often skipped — teams enable Falco, never tune rules, and discover during a real incident that alerts have been firing into a Slack channel no one watches.
Each surface is exploited by different attacker patterns. The Tesla 2018 cryptojacking incident — attackers found an internet-exposed Kubernetes dashboard with no authentication and deployed crypto miners — was a failure at admission and observation, not build or registry. The recurring "exposed Docker daemon API" findings on Shodan are pure runtime exposure. A robust program addresses all five.
Image Composition — What Goes Into a Container
The single decision that does the most to determine an image's security posture is the choice of base image. Every layer above the base inherits its CVEs, OS package versions, shells, utilities, and attack surface. ubuntu:22.04 commits to roughly 100 OS packages and a full bash environment; alpine:3.19 commits to about 15 packages with musl-libc quirks; a distroless image inherits roughly nothing — no shell, no package manager, no debugging utilities — and pays for that minimalism in operational complexity.
The "dependency for the dependency" tax is what makes the choice consequential. An ubuntu:22.04 image running a Python service pulls in the Ubuntu base, the Python interpreter, the C extensions Python needs, the OS libraries those extensions link against, the pip dependencies, and the C extensions those depend on. The CVE inventory is the union of all of them. A scanner reporting 200 vulnerabilities on a trivial Python service is reporting the truth — most are in OS packages the application never exercises, but they are present in the image bytes and exploitable if a different code path reaches them.
The four base-image strategies that matter in 2026 each make different tradeoffs:
Full distribution (ubuntu, debian). Familiar tooling, broad compatibility, large CVE inventory. The right choice when the workload genuinely needs many OS utilities; wrong as a default because it ships attack surface the application never uses.
Alpine. About 5MB base, musl libc, BusyBox, apk package manager. The default minimal-base choice for the last decade. Tradeoffs: occasional musl-compatibility surprises in compiled extensions, smaller advisory ecosystem, and a still-present shell that an attacker who reaches the container can use.
Distroless (Google distroless, Chainguard). No shell, no package manager, no busybox. Just the application binary, language runtime if needed, and minimum shared libraries. Sizes range from 2MB (static) to 50MB (Java). CVE inventory drops by an order of magnitude. Debugging requires ephemeral debug containers or a separate -debug image variant.
Scratch. The empty image, suitable only for static binaries needing no runtime. Minimum possible attack surface. Tradeoffs: no SSL bundle (HTTPS fails unless ca-certificates is copied in), no /etc/passwd (USER must be numeric), no tzdata.
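The scratch tradeoffs can be seen in a minimal sketch of a static Go service (the module path, binary name, and build stage are illustrative assumptions, not taken from a specific project):

```dockerfile
# Build stage: compile a fully static binary so the final image needs no libc
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server ./cmd/server

# Final stage: the empty image; anything present must be copied in explicitly
FROM scratch
# No SSL bundle exists in scratch: copy the CA certificates or outbound HTTPS
# fails certificate verification
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# No /etc/passwd exists, so USER must be a numeric UID
USER 10001:10001
COPY --from=build /server /server
ENTRYPOINT ["/server"]
```

If the service formats local times, tzdata has to be copied in the same way — nothing is present unless the Dockerfile puts it there.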
The trend across mature 2026 programs is unambiguous: distroless or scratch as the production default, alpine or a full distribution only with explicit reason. The Chainguard ecosystem has made distroless dramatically more practical with broad language coverage, continuous CVE remediation, and attached SLSA provenance — solving the operational problems that kept Google's original distroless project from wider adoption.
Dockerfile Best Practices That Actually Matter
Most Dockerfile guidance is about image size. The security-relevant guidance is a smaller set of decisions that determine whether a compromised application becomes a compromised host.
Here is a deliberately bad Dockerfile that demonstrates the common mistakes:
# Bad: every line is a problem
FROM node:latest
ADD https://example.com/setup.sh /tmp/setup.sh
RUN sh /tmp/setup.sh
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]

The problems: node:latest is unpinned and changes every rebuild — yesterday's working build may produce a different image today, and the security posture is whatever the latest tag points at this minute. ADD https://... downloads from the network at build time with no integrity verification. There is no USER directive, so the container runs as root; an application RCE becomes root-in-container, and any container-escape becomes root-on-host. COPY . . with no .dockerignore ships .env files, .git, node_modules, and IDE configuration into the image. The single-stage build leaves npm, build tools, and devDependencies in the production runtime.
The corresponding fixed Dockerfile:
# Good: pinned, multi-stage, non-root, distroless final
FROM node:20.11.0-alpine@sha256:f2dc6eea95f787e25f173ba9904c9d0647ab2506178c7b5b7c5a3d02bc4af145 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY src/ ./src/
COPY tsconfig.json ./
RUN npm run build
RUN npm prune --omit=dev
# Final stage: distroless, no shell, no package manager
FROM gcr.io/distroless/nodejs20-debian12@sha256:8a35e8e4b1...
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY --from=build /app/package.json ./
USER 10001:10001
EXPOSE 3000
CMD ["dist/server.js"]

The differences: the base is pinned to a version and a digest, so rebuilds produce byte-identical layers and a poisoned tag cannot substitute a different image. npm ci installs exactly what the lockfile specifies (the build step still needs devDependencies such as the TypeScript compiler), and npm prune --omit=dev drops them once the build is done, before the final stage copies node_modules. Multi-stage separates build from runtime — the final image has no npm, no compiler, no source. The final stage is distroless, removing the shell and package manager an attacker needs after RCE. USER 10001:10001 runs as a numeric non-root UID, limiting blast radius if the application is compromised.
A .dockerignore at the root is the other half. Without it, COPY . . ships everything in the build context:
.git
.gitignore
.env
.env.*
node_modules
npm-debug.log
.DS_Store
.vscode
.idea
*.md
Dockerfile
.dockerignore
coverage
.nyc_output
dist
build

ADD has two extra behaviors almost never desirable in production: URL fetch (no integrity check) and auto-extract of tar archives. Use COPY always, fall back to ADD only when the auto-extract surprise is intended.
Layer caching has its own security dimension. Secrets written into a layer remain in layer history even if a later RUN deletes them. The fixes are BuildKit's --mount=type=secret, which mounts the secret only during the RUN, or multi-stage builds where the secret-using stage's filesystem is discarded.
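A sketch of the BuildKit secret mount, assuming a hypothetical private-registry .npmrc as the secret:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20.11.0-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
# The secret is mounted only for the duration of this RUN; it never
# appears in any layer or in the image history.
# Build with: docker build --secret id=npmrc,src=$HOME/.npmrc .
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
    npm ci
```

Contrast with `COPY .npmrc .` followed by a later `RUN rm .npmrc`, which leaves the credential recoverable from the intermediate layer.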
Image Scanning at Build Time
Image scanning is the container-security layer with the most tooling, the most overlap with SCA, and the most temptation to treat as the entire program. A scanner decomposes an image into OS and language packages and matches each against vulnerability databases. Modern scanners also report misconfigurations (running as root, exposed ports, embedded secrets in image layers).
Trivy (Aqua Security, open source) is the de facto open-source standard — fast, broadly maintained, covers OS and language packages, IaC, and Kubernetes manifests. Grype (Anchore, open source) pairs with Syft for SBOM-driven scanning — generate the SBOM at build, scan it many times without re-decomposing. Snyk Container (commercial) brings strong remediation guidance and IDE integration; pricing scales with team size. Anchore Enterprise is the commercial tier of Syft/Grype, strong on compliance reporting. Docker Scout is Docker's first-party scanner, convenient for Docker-Hub-centered teams, less compelling elsewhere.
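The Syft-then-Grype flow looks like this in practice (the image name is illustrative; the flags are current Syft/Grype options, but verify against your installed versions):

```shell
# Generate the SBOM once, as a build side effect
syft my-app:1.2.3 -o cyclonedx-json > sbom.cdx.json

# Scan the SBOM as often as needed without re-decomposing the image,
# failing only at the chosen severity threshold
grype sbom:sbom.cdx.json --fail-on critical
```

Because the SBOM is an artifact, nightly rescans against updated vulnerability databases cost seconds, not a full image decomposition per run.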
A typical Trivy invocation against a built image:
$ trivy image --severity HIGH,CRITICAL --exit-code 1 my-app:1.2.3
2026-04-25T10:00:00Z INFO Vulnerability scanning is enabled
2026-04-25T10:00:01Z INFO Detected OS: alpine
2026-04-25T10:00:01Z INFO Detecting Alpine vulnerabilities...
2026-04-25T10:00:02Z INFO Number of language-specific files: 2
my-app:1.2.3 (alpine 3.19.1)
==============================
Total: 4 (HIGH: 3, CRITICAL: 1)
┌──────────┬────────────────┬──────────┬───────────────┬───────────────┬─────────────────────────┐
│ Library │ Vulnerability │ Severity │ Installed Ver │ Fixed Version │ Title │
├──────────┼────────────────┼──────────┼───────────────┼───────────────┼─────────────────────────┤
│ openssl │ CVE-2024-XXXXX │ CRITICAL │ 3.1.4-r5 │ 3.1.4-r6 │ openssl: heap buffer │
│ │ │ │ │ │ overflow in X.509 parse │
├──────────┼────────────────┼──────────┼───────────────┼───────────────┼─────────────────────────┤
│ busybox │ CVE-2024-YYYYY │ HIGH │ 1.36.1-r15 │ 1.36.1-r16 │ busybox: integer │
│ │ │ │ │ │ overflow in awk │
└──────────┴────────────────┴──────────┴───────────────┴───────────────┴─────────────────────────┘
app/node_modules (npm)
==============================
Total: 2 (HIGH: 2)
[...]

The false-positive crisis from SCA scanning applies equally here. A typical alpine-based Node.js image produces 30-80 findings on first scan; Ubuntu-based, 200-400 is common. Most are in OS packages the application never exercises. Container scanning has even worse reachability tooling than language-package SCA — OS-package call graphs are not amenable to the same static analysis. Failing the build on every finding produces revolt; failing on a narrow policy (critical-severity-with-fix, secrets in layers, untrusted base images) produces signal. Most container findings are misconfigurations rather than novel vulnerabilities — the running-as-root finding is more actionable than the openssl CVE in a code path the application never reaches.
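The narrow blocking policy can be expressed directly in the gate invocation; a sketch with Trivy (these flags exist in current Trivy releases; the image name is illustrative):

```shell
# Fail the build only on fixable CRITICALs and on secrets baked into
# layers; everything else is reported but does not block
trivy image \
  --severity CRITICAL \
  --ignore-unfixed \
  --scanners vuln,secret \
  --exit-code 1 \
  my-app:1.2.3
```

The `--ignore-unfixed` flag is the load-bearing one: a critical CVE with no upstream fix generates a ticket, not a blocked release.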
The Distroless Movement
The strongest single hardening move available in 2026 is switching from a general-purpose base image to a distroless one. CVE inventory drops by an order of magnitude. Post-RCE escalation surface drops to nearly nothing. Image size drops by 50-90%. The operational tax — "we cannot kubectl exec and use bash to debug" — is real but manageable.
The category started with Google's distroless project (gcr.io/distroless), images containing language runtimes and minimal C libraries with nothing else. Google's original cadence was slow, which kept adoption limited. The Chainguard ecosystem extended the model with broader language coverage, faster CVE remediation, and attached SLSA provenance, and is what most teams mean by "distroless" in 2026. Wolfi OS — also Chainguard — is the underlying minimal distribution most current distroless images are built on, designed specifically for container use.
The "no shell" property is what makes distroless meaningful as a security control. RCE in a non-distroless container hands the attacker bash, curl, wget, ssh, and the rest of the standard toolkit — lateral movement, exfiltration, and persistence become trivial. RCE in a distroless container leaves only the application binary and language runtime — no shell to spawn, no curl to call out with. The compromise is real but constrained.
The operational tax is the genuine downside. kubectl exec -it pod -- bash stops working. Investigations move to ephemeral debug containers (kubectl debug, Kubernetes 1.25+), sidecar debug containers in non-production, or a separate -debug image variant. Teams that adopt distroless without addressing the debug workflow bounce back to alpine within a sprint; teams that invest in the ephemeral-debug tooling stay distroless.
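The ephemeral-debug workflow, sketched (the pod, namespace, and container names are illustrative):

```shell
# Attach a throwaway debug container to a running distroless pod;
# --target shares the app container's process namespace, and nothing
# is added to the production image itself
kubectl debug -it my-app-7d4b9c5f -n production \
  --image=busybox:1.36 \
  --target=app -- sh
```

The debug container disappears when the session ends; the production image stays shell-free the entire time.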
Registry Security and Image Signing
Once built, the image travels through a registry to reach runtime. The registry is the choke point where supply-chain integrity can be verified. The dominant pattern in 2026 is signed images verified at admission, anchored by Sigstore's Cosign.
Docker Content Trust (Notary v1) was the original signing system; awkward UX and hard key management kept adoption poor and it was effectively retired. Sigstore (Linux Foundation, 2021) replaced it with short-lived signing keys tied to OIDC identities, a public transparency log (Rekor), and a verification model that checks both signature and log entry. Cosign is the operational standard in 2026.
A typical Cosign sign-and-verify pair, using OIDC keyless signing (so there is no key file to manage):
# Sign at build time, in CI, using the build's OIDC identity
$ cosign sign --yes \
ghcr.io/myorg/my-app@sha256:8a35e8e4b1...
Generating ephemeral keys...
Retrieving signed certificate from Fulcio...
Signing artifact...
tlog entry created with index: 142378942
Pushing signature to: ghcr.io/myorg/my-app
# Verify at admission, in cluster, against signing identity
$ cosign verify \
--certificate-identity-regexp 'https://github.com/myorg/.*' \
--certificate-oidc-issuer 'https://token.actions.githubusercontent.com' \
ghcr.io/myorg/my-app@sha256:8a35e8e4b1...
Verification for ghcr.io/myorg/my-app@sha256:8a35e8e4b1... --
The following checks were performed:
- The cosign claims were validated
- The signatures were verified against the specified public key
- The transparency log entry was verified
- Existence of the claims in the transparency log was verified offline

The keyless flow eliminates the key-management problem that killed Content Trust. The signing identity is the build workload's OIDC identity (GitHub Actions, GitLab CI, Buildkite), so there is no long-lived signing key to rotate or compromise. Verification policy can enforce "images must be signed by our own GitHub Actions builds," not just "images must be signed."
The in-toto attestation framework extends signing to attach statements about the image — SBOM, scan results, provenance ("built from this commit by this CI workflow at this time") — that downstream verification can require. SLSA (Supply-chain Levels for Software Artifacts) defines the hierarchy: Build Level 1 requires basic provenance; Build Level 3, the top of the current Build track, requires signed, unforgeable provenance generated on a hardened build platform. The 2026 baseline for production supply chains is SLSA Build Level 3 with Sigstore signing.
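Attaching and verifying an SBOM attestation with Cosign, as a sketch (keyless, in CI; the digest is truncated the same way as in the earlier example):

```shell
# Attach a CycloneDX SBOM as a signed in-toto attestation
cosign attest --yes --type cyclonedx \
  --predicate sbom.cdx.json \
  ghcr.io/myorg/my-app@sha256:8a35e8e4b1...

# Verify the attestation exists and was produced by the expected
# CI identity, not just by anyone holding a Sigstore account
cosign verify-attestation --type cyclonedx \
  --certificate-identity-regexp 'https://github.com/myorg/.*' \
  --certificate-oidc-issuer 'https://token.actions.githubusercontent.com' \
  ghcr.io/myorg/my-app@sha256:8a35e8e4b1...
```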
This integrity story connects directly to OWASP A08 — Software and Data Integrity Failures. Image signing is integrity-at-deploy-time made operational. "We trust Docker Hub" is not a security control — the registry has hosted credential-stuffing victims and typosquatted images. Signature verification at admission is what converts trust-by-assertion into trust-by-cryptography.
Admission Control — The Last Gate Before Cluster
The Kubernetes admission controller is where build-time and registry-time controls become enforceable rather than advisory. An admission webhook intercepts every API request to create or modify a workload and admits or rejects it against policy. The two engines that dominate are OPA Gatekeeper (Rego, portable beyond Kubernetes) and Kyverno (Kubernetes-native YAML). Both are mature CNCF projects; the choice is mostly stylistic.
The Pod Security Standards — privileged, baseline, restricted — are the Kubernetes-native baseline admission should enforce at minimum. They succeeded the deprecated PodSecurityPolicy (removed in 1.25) and run via the built-in PodSecurity admission controller with no external webhook required.
Privileged is unrestricted — running as root, host PID and network namespaces, hostPath mounts of the host filesystem — and is the effective default if no policy is enforced. Baseline blocks the worst (no privileged containers, no host namespaces or host networking, no hostPath volumes, only approved capabilities) and is the minimum any production cluster should enforce. Restricted is the hardened level: runAsNonRoot required, privilege escalation disabled, all capabilities dropped, seccomp set to RuntimeDefault, and volume types limited to a safe list; pairing it with a read-only root filesystem and explicit emptyDir volumes for writable paths is the usual companion hardening. Restricted is what production workloads should target; most legacy workloads cannot run at restricted without modification.
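Enforcement is a namespace label consumed by the built-in PodSecurity admission controller; a sketch of a production namespace that enforces restricted and surfaces violations through warnings and audit logs:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Reject pods that violate restricted at admission time
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Also surface violations in API-server warnings and audit events
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

A common migration pattern is to enforce baseline while warning and auditing at restricted, then flip enforce once the warning volume reaches zero.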
A pod manifest that violates restricted:
apiVersion: v1
kind: Pod
metadata:
name: bad-pod
spec:
containers:
- name: app
image: my-app:latest
securityContext:
privileged: true # blocked at baseline
runAsUser: 0 # blocked at restricted
allowPrivilegeEscalation: true # blocked at restricted
capabilities:
add: ["SYS_ADMIN"] # blocked at baseline
volumeMounts:
- name: host
mountPath: /host
volumes:
- name: host
hostPath:
path: / # blocked at baseline
        type: Directory

The corresponding restricted-compliant version:
apiVersion: v1
kind: Pod
metadata:
name: good-pod
spec:
securityContext:
runAsNonRoot: true
runAsUser: 10001
runAsGroup: 10001
fsGroup: 10001
seccompProfile:
type: RuntimeDefault
containers:
- name: app
image: ghcr.io/myorg/my-app@sha256:8a35e8e4b1...
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop: ["ALL"]
volumeMounts:
- name: tmp
mountPath: /tmp
- name: cache
mountPath: /app/cache
volumes:
- name: tmp
emptyDir: {}
- name: cache
      emptyDir: {}

The image is pinned to a digest, the container runs as a numeric non-root UID, all capabilities are dropped, privilege escalation is blocked, the root filesystem is read-only, seccomp is set, and writable paths are explicit emptyDir volumes. The gap between this and what most teams have in their Helm charts today is what makes the migration to restricted Pod Security a multi-quarter project, not a one-sprint task.
Beyond Pod Security Standards, mature clusters enforce additional admission policies via Sigstore policy-controller, Kyverno, or Gatekeeper: images must come from approved registries, must be signed by approved identities, must carry fresh scan attestations, and resource requests must be set. Each closes an attack vector the build-time controls alone cannot.
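As a sketch, a Kyverno policy enforcing the signed-only rule against the same CI identity used in the earlier Cosign example (the registry pattern and identity are illustrative, and the schema should be checked against your Kyverno version):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: verify-cosign-keyless
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences:
            - "ghcr.io/myorg/*"
          attestors:
            - entries:
                - keyless:
                    subject: "https://github.com/myorg/*"
                    issuer: "https://token.actions.githubusercontent.com"
```

Any pod whose image is not signed by a matching identity is rejected before it is scheduled — the "advisory finding into precondition" conversion made concrete.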
Runtime Threat Detection
Even with hardened images, signed registries, and strict admission, runtime detection is necessary because compromises happen in code paths static analysis cannot see. An RCE in a dependency runs the attacker's payload inside a container that was built clean. A deserialization bug spawns a process the application binary never spawns under normal load. A configuration error exposes a service admission did not block.
The dominant open-source tool is Falco, originally from Sysdig, now CNCF-graduated. Falco taps kernel events (kernel module originally, eBPF now) and matches them against rules — "alert when a shell spawns inside a container," "alert when an unexpected outbound connection is made from a database pod." The rules library ships hundreds of pre-built detections; tuning is required to suppress false positives.
A Falco rule that detects an unexpected shell spawn in a production container:
- rule: Unexpected shell in production container
desc: A shell was spawned inside a container that should not have one
condition: >
spawned_process and
container and
shell_procs and
not container.image.repository in (allowed_debug_images) and
k8s.ns.name in (production_namespaces) and
not proc.pname in (entrypoint_procs)
output: >
Shell spawned in production container
(user=%user.name user_loginuid=%user.loginuid
container_id=%container.id container_name=%container.name
image=%container.image.repository:%container.image.tag
shell=%proc.name parent=%proc.pname
cmdline=%proc.cmdline)
priority: WARNING
  tags: [container, shell, mitre_execution]

The rule fires when a shell starts inside a container not on the debug allowlist, in a production namespace, with a parent process other than the legitimate entrypoint. In a distroless deployment, this rule effectively never fires under normal operation — there is no shell to spawn — so when it does fire, it is a high-signal indicator that something has executed a shell binary downloaded post-startup or staged through a multi-step exploit.
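Several of the lists and macros the rule references are site-specific and must be defined alongside it; a sketch of matching definitions (all values are illustrative):

```yaml
# Companion definitions for the rule above (values are illustrative)
- list: allowed_debug_images
  items: ["ghcr.io/myorg/debug-tools"]

- list: production_namespaces
  items: [production, payments]

- list: entrypoint_procs
  items: [node, java, python3]

- macro: shell_procs
  condition: proc.name in (bash, sh, zsh, ash, dash)
```

Untuned lists are where Falco alert fatigue comes from: an entrypoint that legitimately shells out and is missing from entrypoint_procs fires on every deploy.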
Tetragon, from Isovalent (the Cilium team), is the eBPF-native alternative that has gained traction since 2023. Tetragon can enforce policy in addition to detecting — a policy can kill a shell-spawn process before it runs. Commercial offerings — Aqua, Sysdig, Wiz, Lacework — extend the pattern with managed detection content, ML-based anomaly detection, and lateral-movement tracking. What runtime catches that scanning misses includes reverse shells from RCE, crypto miners downloaded post-startup, DNS/HTTP exfiltration, lateral movement to unintended services, container escape attempts, and service-account-token theft. None are detectable by image scanning because they all happen after the container starts.
The Tesla 2018 cryptojacking incident is the canonical case study. Attackers exploited an unauthenticated Kubernetes dashboard, deployed mining pods, and ran them in production until researchers reported it. A baseline Falco rule for "unexpected outbound traffic to mining pools" or "unexpected high CPU in a workload that does not normally use CPU" would have surfaced the operation in minutes. Image scanning caught nothing — the mining pods were legitimate cryptominer images, scanning correctly, doing exactly what they were configured to do.
Network Policy and Service Mesh Security
The default Kubernetes networking model is permissive: every pod can reach every other pod. In a flat network, an RCE in any pod can reach every database, every internal API, every secrets store. The first network-policy commitment a serious cluster makes is default-deny — a NetworkPolicy denying all traffic, with subsequent policies opening specific paths. This is the single highest-leverage network control after admission-time image policy.
This connects directly to SSRF (OWASP A10). An SSRF in a workload that can reach every other pod is an SSRF that can pivot to internal services. The Capital One 2019 breach is the canonical demonstration — an SSRF in a metadata-endpoint-reachable workload extended into IAM credential theft and S3 data exfiltration. Blast radius was determined by what the workload could reach from its network position.
Default-deny in Kubernetes:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
namespace: production
spec:
podSelector: {}
policyTypes:
- Ingress
    - Egress

This denies all ingress and egress in the production namespace. Subsequent policies open specific paths — pod-A reaches pod-B on 5432, pod-B calls api.stripe.com on 443. Verbosity is the point: every allowed path is documented, every new path requires explicit policy. Cilium and Calico extend basic NetworkPolicy with L7 awareness (HTTP path rules, gRPC method rules) and DNS-based egress.
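One of those subsequent path-opening policies, sketched (the pod labels are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-postgres
  namespace: production
spec:
  # Selects the database pods; only my-app may reach them, only on 5432
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: my-app
      ports:
        - protocol: TCP
          port: 5432
```

An SSRF in any other pod in the namespace now gets a connection refused from the database instead of a login prompt.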
Service meshes — Istio, Linkerd, Cilium Service Mesh — add mutual TLS between every pod-to-pod connection. Every connection is authenticated to a workload identity (typically SPIFFE bound to the service account), encrypted, and authorizable per-call. A pod that compromises one workload identity cannot impersonate another's calls. The tradeoff is operational: a sidecar (or Cilium ambient eBPF proxy) per pod, 5-15% CPU/memory overhead, and a new operational surface to maintain. The benefit is that pod-to-pod authentication becomes a platform property and SSRF blast radius becomes structurally bounded.
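In Istio, the namespace-wide mTLS requirement is a one-object policy; a sketch for the production namespace:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  # Reject any plaintext connection to workloads in this namespace;
  # every caller must present a workload identity over mutual TLS
  mtls:
    mode: STRICT
```

AuthorizationPolicy objects then layer per-call rules (which identity may call which service, on which path) on top of the authenticated channel.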
The Mitigation Playbook
The container security program that works in 2026 is layered, owned, and enforced. The components are:
Build pipeline gates. Image scanning on every build with a narrow blocking policy (critical-severity-with-fix, secrets-in-layers, untrusted-base-image). SBOM generation as a CycloneDX side effect. Multi-stage builds, pinned base by digest, USER non-root numeric UID, committed .dockerignore.
Signed images only. Cosign signing in CI via OIDC keyless. Sigstore policy-controller (or Kyverno) at admission rejecting any image not signed by an approved identity. SLSA Level 3 attestations attached to every production image.
Distroless or minimal base. Distroless production images by default, alpine when distroless is impractical, full distribution only with explicit reason. Investment in ephemeral debug containers and -debug variants for non-production.
Pod Security Standards: restricted. Production namespaces labeled with pod-security.kubernetes.io/enforce: restricted. Workloads needing elevated privileges isolated in dedicated namespaces with explicit exceptions.
Runtime monitoring. Falco or Tetragon cluster-wide with tuned rules. Alerts routed to a security telemetry pipeline with an actual on-call, not a Slack channel no one watches.
Network segmentation. Default-deny NetworkPolicy in every production namespace. Service mesh with mTLS for production-tier workloads. Egress controlled and logged.
Training. Developers who can read a Trivy report and prioritize correctly, who write Dockerfiles with USER and multi-stage by default, who know what Pod Security Standards block and why. The DevSecOps function typically owns the platform side; developer enablement is what makes the controls feel like guardrails rather than walls.
Image Scanning Catches CVEs. Developers Decide What to Ship.
A container security program that ends at the build-time scan misses every runtime compromise that ever happens. The hard part of the discipline is the integration — Dockerfile hygiene that does not slow the team down, signing policies that admission can actually enforce, distroless adoption that does not break the debug workflow, runtime detection that produces signal rather than alert fatigue. SecureCodingHub's platform builds the container-aware judgment that turns these controls from compliance burdens into operational signal: developers who write multi-stage Dockerfiles by default, who know why USER 10001 matters, who can read a Falco alert and a Pod Security violation without senior-engineer hand-holding. If your team is migrating to distroless, rolling out signed-only admission, or trying to operationalize Pod Security Standards across legacy workloads, we'd be glad to walk you through how our hands-on labs change that adoption curve.
Closing: Container Security Is the Whole Lifecycle, Not the Scan
The mistake the largest number of container security programs make is treating the discipline as image scanning plus a sprinkling of best practices. Container security is not the build-time scan. It is a lifecycle discipline covering what goes into the image, how it is signed, what is allowed to run in the cluster, what the running container is doing, and what the running container can reach. The Tesla cryptojacking incident did not happen because of a CVE in a base image. The Capital One blast radius did not happen because of an unsigned image. The recurring exposed-Docker-daemon findings do not happen because of a Dockerfile mistake. Each is a failure at one of the layers above, and each is the kind of incident a layered program prevents while a scan-and-hope program does not.
None of these practices is exotic in 2026. The institutional commitment to apply them consistently — across every image, every cluster, every team — is what separates programs that produce signal from programs that produce dashboards. Most container findings are misconfigurations rather than novel vulnerabilities, most production compromises are post-runtime rather than pre-build, and most of the leverage in the program is at admission and runtime, not at the scanner. Knowing what is in your image, who signed it, what the cluster will let it do, what it is doing right now, and what it can reach over the network — that is the working definition of container security in 2026, and the practice the rest of cloud-native application security increasingly depends on.