TL;DR
FIPS 140-3 isn’t a library you import — it’s a boundary you draw. This second post in the series unpacks what FIPS 140-3 means at the codebase level: which cryptographic libraries are actually validated, how the runtime self-tests work, and why the same FIPS-mode binary behaves differently inside a container, a Lambda, or a long-running monolith. If you’re an architect choosing a base image or an engineer trying to understand why your CI suddenly fails when GODEBUG=fips140=on, this is for you.
The September 2026 deadline for the FIPS 140-2 transition makes this urgent reading.
Recap & Where We Are
In Part 1 we made the business case: FIPS and FedRAMP are architectural decisions, not security-team checkboxes, and SOC 2 is the natural stepping stone for ISVs entering the federal market. We promised to go deeper into the engineering reality from here. This post starts there — at the cryptographic module level — and works outward to packaging.
If you take one thing from this post, take this: FIPS 140-3 compliance is determined by where you draw your cryptographic boundary, not by which library you call. Every architectural decision flows from that.
The Hard Deadline Nobody’s Talking About Loudly Enough
Before we go further: September 21, 2026. That’s the date all FIPS 140-2 certificates move to Historical status. After that date, federal procurement explicitly requires FIPS 140-3 validated modules — not “in the validation queue,” not “submitted to CMVP,” but actually validated.

This deadline is roughly five months away as I write this. If your stack today depends on a FIPS 140-2 certified library — and most do, because 140-3 validations are still catching up — you have a window to migrate, retest, and rebuild your CI/CD around the new module. That’s the urgency context for everything below.
The Cryptographic Boundary — The Architectural Decision
A cryptographic module in FIPS terminology is the set of hardware, software, or firmware that implements approved security functions and lives within a defined cryptographic boundary. The boundary is what’s been tested and validated by a NIST-accredited lab and listed on the CMVP active certificate list.
In practice, your application is rarely a cryptographic module. Your application uses one — typically OpenSSL’s FIPS provider, BoringCrypto, the Go Cryptographic Module, Bouncy Castle FIPS, or a kernel-level module shipped by your OS vendor. Your job as an architect is to decide:
- What crypto library will every service in our system use?
- Where does the boundary sit — in the container, in the OS, or in the application binary?
- What happens when a service violates the boundary by reaching for a non-validated algorithm?
Get those three answered up front and FIPS becomes tractable. Get them wrong and you’ll be retrofitting for years.

What “FIPS Mode” Actually Does at Runtime
FIPS 140-3 introduces stricter runtime behavior than 140-2, and this is where engineering teams hit unexpected walls. When a process starts in FIPS mode:
- An integrity self-check runs at init time, comparing a digest of the module computed at build time against a digest of the code actually loaded in memory. Mismatch = abort.
- Known-answer self-tests run for each algorithm, either at init or on first use, depending on the module’s CMVP security policy.
- TLS, signature, and key exchange algorithms are constrained to the FIPS-approved subset. Negotiating TLS 1.0 or an unapproved cipher? The handshake fails.
- Random number generation is sourced from a NIST SP 800-90A DRBG, with the platform’s CSPRNG mixed in as additional entropy.
For Go specifically — and this matters because Go is the lingua franca of cloud-native — Go 1.24 introduced native FIPS 140-3 support. The Go Cryptographic Module v1.0.0 ships with Go 1.24, and FIPS mode is enabled with GODEBUG=fips140=on either as an environment variable or in go.mod. When enabled, the standard library transparently uses the validated module for crypto/tls, crypto/rand, crypto/ecdsa, and friends — no cgo, no FFI overhead, no separate binary.
There’s also a stricter GODEBUG=fips140=only setting that panics or returns an error if any non-FIPS-approved algorithm is invoked. This is the mode I recommend running CI tests against — better to fail loudly in CI than discover a non-compliant code path during 3PAO assessment.
The Library Landscape — Language by Language
ISVs in the wild rarely run a single-language stack. The five languages I see most often in the cloud-native federal-adjacent space are Go, Rust, Python, Node.js, and Java. Pick your language below to see the practical state of FIPS 140-3 — what’s validated, how to enable it, and the gotcha that bites teams.
Go
Module: Go Cryptographic Module v1.0.0, shipped natively with Go 1.24 (released February 2025). Pure Go and Go assembly — no cgo, no FFI. Awarded CAVP certificate A6650, submitted to CMVP, and progressing through validation.
How to enable:
# As an environment variable
GODEBUG=fips140=on go run ./cmd/server
# Or in go.mod
godebug fips140=on
# Strict mode — panic on any non-FIPS algorithm use
GODEBUG=fips140=only go test ./...
What changes when on: crypto/tls only negotiates FIPS-approved TLS versions and cipher suites. crypto/rand.Reader uses an SP 800-90A DRBG. RSA, ECDSA, ECDH, AES, SHA-2/3 all route through the validated module. Init-time integrity self-check runs automatically.
Gotcha: GODEBUG=fips140=on and =only are not supported on OpenBSD, Wasm, AIX, or 32-bit Windows. If you cross-compile for those targets, FIPS mode is unavailable. Also: the deprecated GOEXPERIMENT=boringcrypto path is incompatible with native FIPS mode — you must migrate, not run both.
My recommendation: for greenfield Go services targeting federal markets, Go 1.24+ native FIPS mode is now the cleanest path. Run GODEBUG=fips140=only in CI to catch boundary violations early.
Python
Module: Python inherits FIPS compliance from the OpenSSL it’s linked against — the “FIPS Inside” pattern. There’s no Python-specific validated module. Build CPython against the OpenSSL FIPS provider (or against AWS-LC), run on a host with OpenSSL FIPS mode enabled, and the standard library’s hashlib, ssl, and secrets modules inherit compliance.
How to enable:
- On a FIPS-enabled host (RHEL/UBI booted with fips=1, or a Photon/Wolfi-based image with the FIPS provider configured), Python’s ssl and hashlib automatically use the validated module.
- Verify at runtime:
from cryptography.hazmat.backends.openssl.backend import backend
print(backend._fips_enabled)  # True if FIPS mode is active
Gotcha: This is the language with the most footguns. The cryptography package routes through OpenSSL — fine. But pure-Python crypto reimplementations in third-party packages (and there are many) silently bypass the validated module. PyJWT, pycryptodome, passlib with non-FIPS algorithms, hand-rolled HMAC implementations — all of these can land you in a non-compliant state without any warning. SBOM scanning that flags non-OpenSSL crypto imports is the only reliable defense.
My recommendation: standardize on cryptography (the package) for everything. Lint or policy-gate against pycryptodome, M2Crypto, and pure-Python crypto. Run on a FIPS-enabled base image so the OpenSSL boundary is established at the OS level, not inside the venv.
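To make the policy gate concrete, here is a minimal sketch of a CI check that fails a build when a banned crypto package appears in a requirements file. The banned list and the sample requirements are illustrative assumptions, not a complete policy — a production gate should also scan transitive dependencies and lockfiles.

```python
# Minimal CI gate sketch: fail the build if a requirements file pins
# any package from a banned list of non-FIPS crypto implementations.
# The banned list here is illustrative -- extend it for your stack.
import re

BANNED_CRYPTO_PACKAGES = {"pycryptodome", "pycrypto", "m2crypto"}

def find_banned(requirements_text: str) -> list[str]:
    """Return banned package names found in a requirements.txt body."""
    hits = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if not line:
            continue
        # The package name is everything before the first version
        # specifier, extra marker, or environment marker.
        name = re.split(r"[=<>!~\[;]", line, maxsplit=1)[0].strip().lower()
        if name in BANNED_CRYPTO_PACKAGES:
            hits.append(name)
    return hits

if __name__ == "__main__":
    sample = "cryptography==42.0.5\npycryptodome==3.20.0  # TODO remove\n"
    print(find_banned(sample))  # ['pycryptodome']
```

Wire this into CI so a non-empty result exits non-zero; the same pattern extends to `pip freeze` output for transitive coverage.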
Node.js
Module: Node.js, like Python, inherits FIPS compliance from its underlying OpenSSL. Node 18+ supports the OpenSSL 3.x FIPS provider. The crypto module routes through OpenSSL.
How to enable:
# Build Node.js with FIPS support, or use a FIPS-enabled base image
node --enable-fips ./server.js
# Or force-enable for the process
node --force-fips ./server.js
# Verify at runtime
node -e "console.log(require('crypto').getFips())"  # 1 if enabled
What changes when on: non-approved algorithms throw immediately on use. TLS 1.0/1.1 are unavailable. crypto.createHash('md5') throws.
Gotcha: the JavaScript ecosystem has its own version of Python’s pure-implementation problem. bcrypt, argon2, hand-rolled JWT signing — many widely used npm packages ship their own crypto (often in WebAssembly or native bindings) that bypass the validated module. npm audit won’t flag this; you need explicit allowlisting. Also: a lot of Node observability and APM agents do their own TLS termination — verify those agents respect FIPS mode or you’ll have data egressing through a non-compliant TLS path.
My recommendation: treat the Node.js dependency tree as suspect. Use a FIPS-enabled base image, enable --force-fips, and run a “kill switch” test in CI that confirms crypto.getFips() === 1 at process start.
Rust
Module: AWS-LC-FIPS 3.x, accessed through the aws-lc-rs crate. AWS-LC itself received its FIPS 140-3 Level 1 certificate in 2024, and AWS-LC-FIPS 3.0 was the first cryptographic library to include ML-KEM (post-quantum key encapsulation) in its FIPS 140-3 validation.
How to enable: opt into the fips feature in your Cargo.toml:
[dependencies]
aws-lc-rs = { version = "1", features = ["fips"] }
For TLS specifically, rustls integrates with aws-lc-rs and exposes a FIPS provider:
[dependencies]
rustls = { version = "0.23", features = ["fips"] }
rustls::crypto::default_fips_provider()
    .install_default()
    .expect("default provider already set elsewhere");
The rustls FIPS support is currently covered by FIPS 140-3 certificate #4816 — verify the security policy matches your operating environment.
Gotcha: aws-lc-fips-sys builds require CMake, Go (yes, Go — for AWS-LC’s build tooling), and potentially bindgen depending on the target platform. Your CI base image needs all three. Pin to a specific aws-lc-rs version to avoid auto-upgrading to a newer FIPS module that hasn’t been validated yet for your operating environment.
My recommendation: Rust + rustls + aws-lc-rs (FIPS feature) is now a viable, performant federal-market stack. The ring-compatible API means migration from non-FIPS Rust services is largely mechanical.
Java
Module: Bouncy Castle FIPS (BC-FJA), distributed as bc-fips.jar. Currently certified for FIPS 140-3 in version 2.x. An alternative is the Amazon Corretto Crypto Provider (ACCP), which uses AWS-LC underneath.
How to enable: register Bouncy Castle FIPS as the highest-priority security provider in your JVM’s java.security file:
security.provider.1=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider C:DEFRND[HMACSHA512(256)];HYBRID;ENABLE{ALL};
security.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:BCFIPS
security.provider.3=com.sun.net.ssl.internal.ssl.Provider BCFIPS
Then bump the existing providers’ priorities down (SUN becomes security.provider.4, SunRsaSign becomes security.provider.5, etc.).
What changes when on: standard JCE calls (MessageDigest.getInstance("SHA-256"), Cipher.getInstance(...), KeyGenerator, SSLContext) route through Bouncy Castle FIPS. Non-approved algorithms throw NoSuchAlgorithmException.
Gotcha: if your application bundles its own crypto libraries (and Java apps often do — Apache Shiro, Tink, custom JCE providers), those calls don’t go through Bouncy Castle FIPS. Audit the classpath. Also: Spring Security and similar frameworks sometimes default to non-FIPS algorithms; you’ll need explicit configuration to override defaults like password hashing schemes.
My recommendation: Bouncy Castle FIPS for the JVM ecosystem is the established, well-documented path. ACCP is a strong alternative if you’re already AWS-aligned and want better performance. Either way, lock down the security providers list and CI-gate any change to java.security.
One Cross-Cutting Pattern
Across all five languages, the same pattern emerges: your code calls the standard crypto API of your language; the module underneath is what’s validated. Your job is to ensure that module is the validated one — and that nothing in your dependency tree reaches around it.
The “reaches around it” failure mode is the most expensive one to discover late. SBOM scanning, allowlist policies on transitive dependencies, and CI that fails on non-FIPS algorithm use are the controls that catch it before audit time. We’ll wire those concretely in Part 5.
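As a taste of that control, here is a minimal runtime canary, sketched in Python: it probes whether the interpreter's OpenSSL rejects a non-approved digest. On a FIPS-enabled host, constructing MD5 for security use raises ValueError. The canary is only a signal — it does not prove the loaded module is the validated one.

```python
# Runtime canary sketch: check whether non-approved digests are
# constructible in this interpreter. A rejected MD5 is consistent
# with (but not proof of) an active FIPS-mode OpenSSL underneath.
import hashlib

def fips_canary() -> dict:
    """Report constructibility of a non-approved and an approved digest."""
    result = {}
    try:
        # usedforsecurity is honored on Python 3.9+; FIPS-mode OpenSSL
        # rejects MD5 for security use with ValueError.
        hashlib.new("md5", usedforsecurity=True)
        result["md5_constructible"] = True
    except ValueError:
        result["md5_constructible"] = False
    try:
        hashlib.new("sha256")  # FIPS-approved; must always succeed
        result["sha256_constructible"] = True
    except ValueError:
        result["sha256_constructible"] = False
    return result

if __name__ == "__main__":
    status = fips_canary()
    if status["md5_constructible"]:
        print("WARNING: MD5 constructible -- FIPS mode likely NOT active")
    else:
        print("MD5 rejected -- consistent with FIPS mode")
```

Run this as a startup assertion in services that must be FIPS-bound, so a misconfigured host fails fast instead of serving traffic through a non-validated module.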
Bonus: Kernel-Level and OS-Level Crypto
Worth a brief mention for completeness. The Linux kernel ships its own crypto API which is FIPS-validated on kernels booted with the fips=1 parameter. This matters for IPsec, dm-crypt/LUKS, and kernel TLS offload. Most application-layer workloads don’t rely on it directly, but if you’re building infrastructure software (network appliances, storage products), the kernel boundary becomes part of your story. Document it explicitly in your System Security Plan.
How Packaging Changes the Game
This is where the conversation usually goes sideways with non-platform engineers, so let’s spell it out. The same FIPS-validated library behaves very differently depending on how you package it.
Containers
Containers are the most common deployment shape, and they’re where most teams start. The cryptographic boundary in a container typically lives in the base image — specifically in the OpenSSL FIPS provider or the kernel-level crypto module that your image links against. This means:
- Base image selection becomes a compliance decision. Hardened, FIPS-enabled base images (whether commercial or community-built like Wolfi-derived images) bring you a validated module out of the box. A vanilla alpine:latest does not.
- Every layer is in scope. A RUN apt-get install some-package that pulls in a non-FIPS-validated crypto library can void your boundary. Your SBOM has to account for every binary that performs crypto operations.
- Multi-stage builds need careful audit. A build-stage image that uses non-validated crypto is fine; a runtime-stage image that does is not.
- Image hash + module hash must be reproducible. If you can’t prove the binary you’re running today is byte-identical to the one you tested for FIPS compliance, your assessor will not be happy.
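The reproducibility point above lends itself to a simple automated gate. Here is a hedged sketch of a CI check that rejects Dockerfiles whose FROM lines use mutable tags instead of immutable content digests; the example image names are hypothetical, and a production version would also need to skip references to earlier build-stage aliases.

```python
# CI sketch: reject Dockerfile FROM lines that are not pinned by an
# immutable content digest (name@sha256:...). Mutable tags can drift
# under you; digests cannot.
import re

FROM_RE = re.compile(r"^\s*FROM\s+(\S+)", re.IGNORECASE | re.MULTILINE)

def unpinned_images(dockerfile_text: str) -> list[str]:
    """Return FROM references that are not pinned by sha256 digest."""
    bad = []
    for ref in FROM_RE.findall(dockerfile_text):
        if ref.lower() == "scratch":
            continue  # scratch is the empty image; nothing to pin
        if "@sha256:" not in ref:
            bad.append(ref)
    return bad

if __name__ == "__main__":
    dockerfile = (
        "FROM golang:1.24 AS build\n"
        "FROM cgr.dev/chainguard/static@sha256:0123abcd\n"
    )
    print(unpinned_images(dockerfile))  # ['golang:1.24']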
Serverless (Lambda, Cloud Functions, Container Apps)
This is the painful one. FIPS 140-3 mandates runtime and conditional self-tests — not just startup tests like 140-2. For ephemeral, scale-to-zero workloads, that means every cold start re-runs the self-tests. The implications:
- Cold-start latency increases. The integrity check and known-answer self-tests add measurable overhead. For latency-sensitive workloads, this can change your scaling and concurrency tuning.
- Lambda layers as the boundary. Packaging your FIPS-validated crypto module as a Lambda layer (or its equivalent in Cloud Functions / Container Apps) is the cleanest pattern. The layer is versioned, content-addressed, and reusable across functions.
- Runtime selection matters. Managed Lambda runtimes are validated by AWS for FedRAMP — but only the specific runtime versions in scope. Drift onto an unsupported runtime version and you’re outside the boundary.
- Custom runtimes carry full burden. If you ship a custom runtime, you own the FIPS validation story end to end.
Monoliths
Counterintuitively, monoliths are often the easiest to make FIPS-compliant — and the hardest to keep that way.
- Single boundary, single audit. One binary, one process, one cryptographic module. The boundary is clear.
- But: every dependency is in scope. A monolith with hundreds of transitive dependencies has hundreds of opportunities for a library to bypass the validated module — calling its own bundled crypto, embedding test keys, using a non-approved hash. This is where SAST and SBOM scanning earn their keep.
- Patching is brittle. A point release that touches the cryptographic module — even indirectly — can void validation. You’re choosing between security patches and compliance status, and that’s a terrible position.
Microservices
The good news: each microservice has a smaller compliance surface. The bad news: you have many of them.
- Standardize the crypto stack across services. This is non-negotiable. If service A uses OpenSSL FIPS provider and service B uses BoringCrypto, you’ve doubled your audit work and your risk.
- Internal artifact registry as a boundary control. Pin the specific FIPS-validated library version as an internal artifact; have CI fail any service whose lockfile points elsewhere.
- Service mesh complications. mTLS terminated by a sidecar (Envoy, Linkerd) means the sidecar’s crypto is in scope too. Validate the sidecar’s FIPS posture or terminate TLS in the application.
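The registry-as-boundary control above can be sketched as a fleet-wide check: given the crypto module version each service's lockfile resolves, fail any service that drifts from the approved pin. The module names, versions, and service names below are illustrative assumptions.

```python
# Sketch: enforce one pinned crypto module version across services.
# APPROVED is what the registry (and your assessor) signed off on;
# 'services' maps each service to the versions its lockfile resolves.
APPROVED = {"openssl-fips-provider": "3.0.9"}  # illustrative pin

def drifted_services(services: dict[str, dict[str, str]]) -> list[str]:
    """Return services whose crypto module version differs from APPROVED."""
    bad = []
    for svc, modules in services.items():
        for module, version in modules.items():
            approved = APPROVED.get(module)
            # Unknown modules are someone else's policy problem here;
            # known modules must match the approved pin exactly.
            if approved is not None and approved != version:
                bad.append(svc)
    return bad

if __name__ == "__main__":
    fleet = {
        "billing": {"openssl-fips-provider": "3.0.9"},
        "gateway": {"openssl-fips-provider": "3.1.2"},  # drifted
    }
    print(drifted_services(fleet))  # ['gateway']
```

Run this in a nightly job against every service's lockfile and page the owning team on a non-empty result, rather than waiting for re-authorization to surface the drift.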

Compliance Drift — The Silent Killer
Here’s a failure mode I’ve seen multiple times: a team gets to FIPS 140-3 validation, ships, and then six months later a routine dependency update bumps OpenSSL from 3.0.x to 3.1.x. The module hash changes. The boundary the assessor signed off on no longer matches what’s running in production.
Technically, the team is no longer compliant. Practically, nobody noticed for nine months.
This is compliance drift, and it’s the most common cause of failed re-authorization. Three controls that catch it:
- Reproducible builds. The same source must produce the same binary, byte for byte. Without this, you can’t even prove what you tested.
- Continuous SBOM scanning. Every build emits a Software Bill of Materials; CI compares it against the SBOM that was validated. Drift fails the build.
- Pinned, versioned base images. No :latest tags anywhere. Ever. Pin images by content digest, not by mutable tags.
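The SBOM comparison in the second control reduces to a set diff over components and versions. Here is a minimal sketch under the assumption that each SBOM has been flattened to a name-to-version mapping; real SBOMs (CycloneDX, SPDX) carry more structure, but the drift logic is the same.

```python
# Sketch: compare the current build's SBOM component set against the
# one that was validated; any difference is drift and fails the build.
def sbom_drift(validated: dict[str, str], current: dict[str, str]) -> dict:
    """Return components added, removed, or version-changed since validation."""
    added = sorted(set(current) - set(validated))
    removed = sorted(set(validated) - set(current))
    changed = sorted(
        name for name in set(validated) & set(current)
        if validated[name] != current[name]
    )
    return {"added": added, "removed": removed, "changed": changed}

if __name__ == "__main__":
    validated = {"openssl": "3.0.9", "zlib": "1.3"}
    current = {"openssl": "3.1.2", "zlib": "1.3", "libfoo": "0.1"}
    drift = sbom_drift(validated, current)
    print(drift)  # {'added': ['libfoo'], 'removed': [], 'changed': ['openssl']}
    if any(drift.values()):
        print("SBOM drift detected -- fail the build")
```

In CI, a non-empty drift report exits non-zero; the validated SBOM lives next to the assessment artifacts so the comparison baseline is itself version-controlled.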
Part 5 will show concrete CI/CD wiring for these. For now, internalize that drift detection is a first-class engineering concern, not an audit-time concern.
What This Means For Each Role
Before closing, the role breakdown I promised in Part 1.
Architects own the boundary decision. Which library, which packaging, which runtime, which base image. This is not delegable. Make the call early; document the trade-offs; revisit only when the library or framework forces it.
Senior engineers own the toolchain. CI configuration, SBOM generation, base image lifecycle, the policy gates that fail builds when boundaries are violated. They translate the architect’s decision into automated enforcement.
Junior engineers own the daily hygiene. Knowing which algorithms are on the approved list (AES, SHA-2/3, RSA, ECDSA with approved curves, HKDF), which are not (MD5, SHA-1 for signatures, RC4, DES), and how to spot a crypto.createHash('md5') in a code review before it reaches main.
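That mental allowlist can even be sketched as a review-time helper. The sets below are deliberately simplified — real FIPS 140-3 approval also depends on key sizes, curves, and mode of use (SHA-1, for instance, is disallowed for signatures but not for every use), so treat this as a triage aid, not a verdict.

```python
# Review-triage sketch: classify an algorithm name against simplified
# FIPS-approved / disallowed sets. Approval in practice also depends
# on key size, curve, and mode of use -- this is a first-pass filter.
APPROVED = {"aes", "sha-256", "sha-384", "sha-512", "sha3-256",
            "rsa", "ecdsa", "hkdf"}
DISALLOWED = {"md5", "rc4", "des", "3des"}

def review_flag(algorithm: str) -> str:
    """Classify an algorithm name the way a first-pass reviewer would."""
    name = algorithm.strip().lower()
    if name in DISALLOWED:
        return "reject"
    if name in APPROVED:
        return "ok"
    return "needs-review"  # unknown names get a human look

if __name__ == "__main__":
    for alg in ("md5", "SHA-256", "chacha20"):
        print(alg, "->", review_flag(alg))
```

Anything that comes back "needs-review" (ChaCha20, bcrypt, a vendor-specific name) goes to whoever owns the boundary decision rather than being waved through.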
The most expensive mistake I’ve seen is treating FIPS as something the security team will handle “after the engineering work is done.” By then, the boundary has been violated in dozens of places and the cleanup is more expensive than the original architecture work would have been.
What’s Next
Part 3 will walk through the authorization process itself — gap analysis, 3PAO selection, the SOC 2 → FedRAMP upgrade path, and the FedRAMP 20x acceleration. We’ll spend less time on theory and more on the sequence of decisions and deliverables.
Part 5 is where the hands-on lab lives — concrete Dockerfile, GitHub Actions workflow, and FIPS-mode verification for Go, Python, and Node.js services. If you’re impatient for code, that’s the one to wait for.
Conclusion
FIPS 140-3 is sometimes described as “just use a validated library.” That description is technically correct and operationally misleading. The library is the easy part. The boundary, the packaging, and the drift controls are where engineering effort actually goes — and where most teams discover, mid-audit, that they had a different mental model than their assessor.
Draw the boundary on day one. Pick the library to match the boundary, not the other way around. Standardize across services. Build drift detection into CI before you need it. Do that and FIPS 140-3 becomes infrastructure work — annoying but tractable. Skip it and FIPS becomes a moving target you’ll never quite catch.
See you in Part 3.
Social Snippets
FIPS 140-3 isn’t a library you import — it’s a boundary you draw.
Part 2 of the FIPS & FedRAMP series is up. This one goes deep into the codebase: a per-language treatment of the validated cryptographic modules across Go, Rust, Python, Node.js, and Java — what’s certified, how to enable FIPS mode, and the gotcha that bites every team. Then we look at how packaging shape (containers, serverless, monoliths, microservices) shifts the cryptographic boundary in fundamentally different ways.
The September 21, 2026 deadline for FIPS 140-2 transition is roughly five months out. If your team hasn’t mapped its current crypto modules to validated 140-3 alternatives yet, this post is the prompt.
#FedRAMP #FIPS1403 #PlatformEngineering #CloudNative #Compliance #Golang #Rust #Java
X (Twitter)
Part 2 of the FIPS & FedRAMP series is live.
The cryptographic boundary is an architectural decision, not a library choice. Containers, serverless, monoliths, and microservices each shift it in different ways.
September 2026 deadline for FIPS 140-2 → 140-3 is closer than it looks.
#FIPS1403 #FedRAMP #DevOps
Part 2 of my FIPS/FedRAMP blog series is published. This one is for engineers and architects — what FIPS 140-3 actually does at runtime, which crypto libraries are validated across major language ecosystems, and how container vs. serverless vs. monolith packaging changes the compliance picture entirely.
Part 2 is live 🛡️
What FIPS 140-3 actually means for your code: → The cryptographic boundary is an architecture decision → Per-language deep dive: Go, Rust, Python, Node.js, Java → Containers, serverless, monoliths, microservices each play differently → September 2026 deadline is real
#FIPS #FedRAMP #DevOps #PlatformEngineering #CloudNative
Originally published on Medium on 2026-05-03. Cross-posted to portfolio.hagzag.com