TL;DR: Compliance regimes like FIPS and FedRAMP have pulled supply chain security from “nice to have” into “audit item.” This post is a practitioner’s map of the terms you will keep hearing — SBOM, provenance, SLSA, cosign — and a short, honest explanation of how FIPS 140–2/3 and FedRAMP actually land on your container images. Part 2 walks through what we did about it with Wolfi, an open-source distroless OS.

The Trigger

An enterprise client we work with needs FIPS and FedRAMP alignment. That single sentence triggered a full review of every container image we ship on their behalf — base images, builder images, runtime images, sidecars, everything. Once you start pulling on that thread, you quickly realize that the compliance conversation is really a supply chain conversation in disguise.

How do we verify the contents of an image?

Before we can talk about tools and base images (Part 2), we need a shared vocabulary. So let’s build one.

SBOM — The Ingredients List

A Software Bill of Materials is exactly what it sounds like: a machine-readable inventory of every package, library, and dependency inside your artifact. Two formats dominate: SPDX (Linux Foundation) and CycloneDX (OWASP). Pick one and stick with it.

The reason SBOMs matter is boring and powerful: you cannot patch what you cannot inventory. When the next Log4Shell-class CVE drops, the org with SBOMs answers “are we exposed?” in minutes. The org without them answers it in days.
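To make the “minutes, not days” claim concrete, here is a minimal sketch of that query. The SBOM shape is a simplified package list, not a full SPDX or CycloneDX document, and the image names and versions are invented for illustration.

```python
# Hypothetical sketch: answering "are we exposed?" from persisted SBOMs.
# A real pipeline would parse syft/trivy output; this uses a toy inventory.

def affected_images(sboms, package, bad_versions):
    """Return image names whose SBOM lists `package` at a vulnerable version."""
    hits = []
    for image, packages in sboms.items():
        for pkg in packages:
            if pkg["name"] == package and pkg["version"] in bad_versions:
                hits.append(image)
                break
    return hits

# Synthetic inventory: image -> list of {name, version} entries.
sboms = {
    "api-server:1.4.2": [{"name": "log4j-core", "version": "2.14.1"}],
    "worker:0.9.0":     [{"name": "log4j-core", "version": "2.17.2"}],
    "frontend:3.1.0":   [{"name": "openssl",    "version": "3.2.1"}],
}

exposed = affected_images(sboms, "log4j-core", {"2.14.0", "2.14.1", "2.15.0"})
print(exposed)  # only api-server ships a vulnerable log4j-core
```

With the inventories persisted, this is a lookup, not an archaeology project.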

Three open-source tools do most of the heavy lifting here:

  • Syft (Anchore) — generates SBOMs from images, filesystems, or source trees. Outputs SPDX or CycloneDX.
  • Grype (Anchore) — scans SBOMs or images directly against vulnerability databases.
  • Trivy (Aqua) — all-in-one scanner with SBOM generation, vulnerability detection, misconfiguration checks, and secret scanning.

The Syft + Grype pairing is particularly useful: generate the SBOM once at build time, persist it as an artifact, then re-scan it as the vulnerability database updates. You catch newly disclosed CVEs without re-pulling the image.
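The re-scan loop can be sketched as follows. The vulnerability database is simulated here as a dict of package names to vulnerable versions; in practice you would persist syft’s SPDX/CycloneDX output and re-run grype against it.

```python
# Sketch of the "generate once, re-scan often" loop. The vuln DB is a toy
# dict; the point is that only the persisted SBOM is needed, not the image.
import json

def scan(sbom_json, vuln_db):
    """Match a persisted SBOM against the current vulnerability database."""
    sbom = json.loads(sbom_json)
    return [
        (p["name"], p["version"])
        for p in sbom["packages"]
        if p["version"] in vuln_db.get(p["name"], set())
    ]

# SBOM persisted at build time -- the image never needs to be re-pulled.
sbom_json = json.dumps({"packages": [
    {"name": "curl", "version": "8.5.0"},
    {"name": "zlib", "version": "1.3.1"},
]})

print(scan(sbom_json, {}))                   # day 0: no known CVEs
print(scan(sbom_json, {"curl": {"8.5.0"}}))  # after a new disclosure
```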

What makes an SBOM — and when it was made

Not every SBOM is worth the YAML it’s written in. Where it was produced matters as much as the fact that it exists.

A build-time SBOM is emitted by the builder itself — apko, ko, Buildpacks, Bazel, BuildKit with attestations, or your CI pipeline. It sees the inputs the build actually consumed: the source commit, the resolved dependency graph, the toolchain versions, the base image digest. Because it is produced next to the artifact, it can be signed and attached as an in-toto attestation in the same step. That is the SBOM SLSA L3+ and FedRAMP actually want you to ship.
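What “attached as an in-toto attestation” looks like structurally: an in-toto Statement whose subject is the image digest and whose predicate carries the SBOM. The field names below follow the in-toto Statement schema; the image name, digest, and SBOM payload are made up for the example.

```python
# Illustrative in-toto Statement wrapping a build-time SPDX SBOM.
# Real tooling (cosign attest, apko) emits and signs this for you.

def sbom_attestation(image_digest, spdx_document):
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"name": "registry.example.com/app",  # hypothetical image
                     "digest": {"sha256": image_digest}}],
        "predicateType": "https://spdx.dev/Document",
        "predicate": spdx_document,
    }

att = sbom_attestation("deadbeef" * 8, {"spdxVersion": "SPDX-2.3", "packages": []})
# The subject digest is what binds this SBOM to one specific image build.
print(att["predicateType"])
```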


An after-the-fact SBOM — what you get when you point Syft or Trivy at a finished image — is reconstructed evidence. The scanner parses apk/dpkg/rpm databases, reads go.mod, package.json, requirements.txt, and fingerprints binaries it recognizes. Useful, often necessary, but inherently lossy:

  • Build-only dependencies disappear — stripped, compiled-in, or left behind in a multi-stage build.
  • Statically linked and vendored code is easy to miss. A Go binary with vendored modules often shows up as one opaque blob.
  • Provenance is absent. The scanner sees openssl 3.2.1; it cannot tell you whether it came from your trusted base image or was side-loaded later.
  • Integrity is assumed, not proven. Nothing binds the SBOM to the build event.

Rule of thumb: generate at build time, verify after the fact. The pipeline produces the authoritative SBOM and SLSA provenance as signed attestations attached to the image. Registry and runtime scanners then become a verification layer — they confirm the shipped artifact still matches the attested bill and catch drift. Treating a scanner-generated SBOM as the source of truth is the common compliance trap: you end up with the “SB” but not the “M” — a reconstruction of what a tool could see later, not a signed record of what the builder intended to include.
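Drift detection in that verification layer reduces to a set difference between the attested and the reconstructed package lists. A minimal sketch, with invented package data:

```python
# Sketch: scanner output as a verification layer. Compare what the scanner
# reconstructs from the shipped image against the attested build-time SBOM.

def drift(attested, scanned):
    attested_set, scanned_set = set(attested), set(scanned)
    return {
        "unexpected": sorted(scanned_set - attested_set),  # present but never attested
        "missing":    sorted(attested_set - scanned_set),  # attested but not found
    }

attested = [("openssl", "3.2.1"), ("busybox", "1.36.1")]
scanned  = [("openssl", "3.2.1"), ("busybox", "1.36.1"), ("netcat", "1.10")]

print(drift(attested, scanned))  # netcat was side-loaded after the build
```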

A build-time SBOM is only as trustworthy as the build that produced it. Which is exactly what provenance is for.

Provenance — The Receipt

Provenance is the verifiable record of how an artifact was built: which source commit, which builder, which steps, which inputs. If SBOM tells you what is in the box, provenance tells you where the box came from and who sealed it.

provenance | who did what, when — think of it as the diary of events around a package or image

In modern toolchains, provenance is emitted as an in-toto attestation, signed, and stored alongside the artifact. GitHub Actions, GitLab, and most CI systems now produce SLSA-compatible provenance with minimal configuration. The point is not the format — the point is that you can, at deploy time, verify that a given image was built by the pipeline you expect, from the commit you expect, with the steps you expect.
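A deploy-time policy over that provenance can be as simple as checking a few fields. The nesting below mirrors the SLSA v1.0 provenance layout (`buildDefinition`, `runDetails.builder.id`), but the `source` parameter and the expected values are invented for the sketch; real predicates carry CI-specific parameter names.

```python
# Sketch of a deploy-time provenance check against a SLSA-style predicate.
# Policy: only images built by the expected builder, from the expected repo.

def provenance_ok(predicate, expected_builder, expected_repo):
    builder = predicate["runDetails"]["builder"]["id"]
    source = predicate["buildDefinition"]["externalParameters"]["source"]
    return builder == expected_builder and source.startswith(expected_repo)

predicate = {
    "buildDefinition": {"externalParameters": {
        "source": "https://github.com/acme/api-server@refs/heads/main"}},
    "runDetails": {"builder": {
        "id": "https://github.com/actions/runner/github-hosted"}},
}

print(provenance_ok(predicate,
                    "https://github.com/actions/runner/github-hosted",
                    "https://github.com/acme/api-server"))
```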

Cosign & Sigstore — Signing the Box

Cosign is the signing tool; Sigstore is the ecosystem around it. Together they made artifact signing approachable for the first time.

The shift that matters is keyless signing: instead of managing long-lived signing keys (and the HSMs, rotation policies, and audit logs that come with them), cosign can sign using short-lived certificates issued against an OIDC identity — your GitHub Actions workflow, your CI service account, your developer’s SSO session. The signature and the identity end up in a transparency log (Rekor) that anyone can audit.
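The policy question keyless signing leaves you with is simply: which (issuer, identity) pairs do we trust? That check, which cosign performs after cryptographic verification, reduces to a tuple lookup. The certificate record shape here is simulated; the real verification happens inside cosign and Rekor.

```python
# Simulated keyless-identity policy: accept a signature only if the
# certificate's OIDC issuer and subject match an allow-listed pipeline.

def identity_allowed(cert, allowed):
    return (cert["issuer"], cert["subject"]) in allowed

cert = {
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "https://github.com/acme/api-server/.github/workflows/release.yml@refs/heads/main",
}
allowed = {(
    "https://token.actions.githubusercontent.com",
    "https://github.com/acme/api-server/.github/workflows/release.yml@refs/heads/main",
)}
print(identity_allowed(cert, allowed))  # signed by the expected workflow
```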

Validation at build and runtime

At deploy time, admission controllers like Kyverno, OPA (Open Policy Agent, typically via Gatekeeper), or Connaisseur verify signatures before the image runs. No signature, no pod. That is the enforcement loop that closes everything SBOM and provenance open up.
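As one concrete example of that enforcement loop, a Kyverno `verifyImages` policy for keyless signatures looks roughly like this. The registry pattern, repo, and workflow path are placeholders; treat this as a sketch of the policy shape, not a drop-in manifest.

```yaml
# Illustrative Kyverno ClusterPolicy: block pods whose images are not
# keyless-signed by the expected GitHub Actions workflow.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-signature
      match:
        any:
          - resources:
              kinds: [Pod]
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"          # placeholder registry
          attestors:
            - entries:
                - keyless:
                    issuer: "https://token.actions.githubusercontent.com"
                    subject: "https://github.com/acme/*"   # placeholder org
```

No matching signature, no pod admission.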

SLSA — The Framework That Ties It Together

SLSA (Supply-chain Levels for Software Artifacts) is a graduated framework. It does not invent new tools; it describes how SBOMs, provenance, and signing combine into defensible build integrity.

  • L1 — documented build process, basic provenance
  • L2 — signed provenance from a hosted build service
  • L3 — hardened, non-forgeable provenance; isolated build environments
  • L4 — two-party review, hermetic and reproducible builds

Most production-serious organizations target L3. L4 exists, but very few shops operate there today.

slsa.dev | Supply-chain Levels for Software Artifacts

The mental model I find useful: SLSA is about the build, SBOM is about the contents, cosign is about the signature. They are complementary layers, not competing choices.

FIPS 140–2 vs 140–3 — The Short Version

FIPS 140 is the NIST standard for cryptographic modules. Two versions matter right now:

  • FIPS 140–2 — dominant for roughly two decades. NIST stopped accepting new module validations for 140–2 in September 2021, and existing validations are being sunset over a multi-year window.
  • FIPS 140–3 — the current standard, aligned with ISO/IEC 19790. Stricter requirements around side-channel resistance, non-invasive attacks, and lifecycle documentation.

FIPS | Federal Information Processing Standards

For container workloads the practical impact is the same in both cases: your crypto libraries — OpenSSL, BoringSSL, Go’s crypto stack — must be FIPS-validated, not just “FIPS-capable.” Most upstream base images ship non-validated crypto by default. That is the gotcha that sends teams looking for a new base image.
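One way to probe the “capable vs validated” gap at runtime: FIPS-enforcing builds of OpenSSL refuse non-approved algorithms like MD5 for security use, which Python surfaces as an exception. This is a rough smoke test only; passing it says nothing about whether the module holds a validation certificate.

```python
# Rough runtime probe, NOT a compliance check. Under a FIPS-enforcing
# OpenSSL policy, constructing MD5 for security use is rejected.
import hashlib
import ssl

def fips_mode_likely():
    try:
        hashlib.md5(b"probe", usedforsecurity=True)  # refused under FIPS policy
        return False
    except ValueError:
        return True

print(ssl.OPENSSL_VERSION)   # which OpenSSL the runtime links against
print(fips_mode_likely())
```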

FedRAMP — Where It All Compounds

FedRAMP is the US federal authorization program for cloud services, with three impact baselines: Low, Moderate, and High. Most commercial SaaS targeting federal customers lands on Moderate.


For container workloads specifically, a few NIST 800–53 control families do the heavy lifting:

  • SI (System and Information Integrity) — vulnerability management with remediation SLAs
  • CM (Configuration Management) — authoritative image inventories and drift detection
  • SA (System and Services Acquisition) — supply chain integrity controls

The part that surprises teams: continuous monitoring requires monthly vulnerability scans with documented remediation. FIPS-validated crypto is a hard requirement at Moderate and above, not a recommendation. And every CVE in your base image is your CVE — the auditor does not distinguish between “upstream shipped it” and “we built it.”
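Tracking those remediation deadlines is simple enough to automate. The sketch below uses the commonly cited FedRAMP continuous-monitoring windows of 30/90/180 days for high/moderate/low findings; the findings themselves are synthetic.

```python
# Sketch of remediation-SLA tracking against FedRAMP ConMon windows
# (30/90/180 days for high/moderate/low-risk findings).
from datetime import date

SLA_DAYS = {"high": 30, "moderate": 90, "low": 180}

def overdue(findings, today):
    """Return CVE IDs whose age exceeds the SLA for their severity."""
    return [f["cve"] for f in findings
            if (today - f["detected"]).days > SLA_DAYS[f["severity"]]]

findings = [
    {"cve": "CVE-2024-0001", "severity": "high",     "detected": date(2024, 1, 1)},
    {"cve": "CVE-2024-0002", "severity": "moderate", "detected": date(2024, 2, 1)},
]

print(overdue(findings, date(2024, 2, 15)))  # only the high finding is past its window
```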

How to Tackle It — The Short Playbook

Once the picture above clicks, the playbook writes itself:

  1. Shift CVE management left — pick minimal base images and rebuild frequently, not quarterly.
  2. Make SBOM and provenance part of the build, not an afterthought.
  3. Sign everything and verify signatures at deploy time.
  4. Choose a base OS that was designed for this posture, not retrofitted into it.

That last point is where Wolfi OS enters the story.

Coming in Part 2

In the follow-up we will dig into Wolfi OS and how apko and melange produce images with built-in SBOMs and provenance. See you on the other side in Part 2 — Rebuilding on Wolfi OS.

Further Reading