TL;DR
SOC 2 controls aren’t a separate stack — they’re the same hygiene any well-run cloud platform should already have: identity, logging, encryption, change management, vulnerability management. The work isn’t building new things; it’s wiring evidence collection into what you already run. This post maps the most common SOC 2 control families to concrete tooling on AWS, GCP, and Kubernetes — and flags where the dual-cloud reality gets messy.
Introduction
Up to now in this series we’ve covered why SOC 2 matters for ISVs and how to scope the Trust Service Criteria. Now we get to the part DevOps and platform engineers actually care about: what does an audit-ready stack look like, and how do you build it without rewriting your infrastructure?
The good news: if you’re running on AWS or GCP with a half-decent platform team, you’re probably 60-70% of the way there already. The bad news: that last 30-40% is where evidence collection lives, and it’s the difference between “we do this” and “we can prove it across a 6-month observation window.”

The five control families that matter most
SOC 2 has dozens of individual controls, but for an ISV running on hyperscaler cloud they cluster into five practical families. Get these right and the rest tends to fall into place.
1. Identity and access management
What auditors want: evidence that access is granted on a least-privilege basis, reviewed periodically, and revoked promptly when people leave or change roles.
The concrete controls:
- SSO with MFA enforced for all human access to production
- Role-based access — no shared accounts, no long-lived root credentials
- Quarterly access reviews with documented sign-off
- Joiner/mover/leaver process tied to HR
- Service accounts and machine identities scoped to specific workloads
Tooling map:
| Concern | AWS | GCP | Kubernetes |
|---|---|---|---|
| Human SSO | IAM Identity Center | Cloud Identity / Workspace | OIDC via SSO provider |
| Workload identity | IAM Roles for Service Accounts (IRSA) | Workload Identity Federation | ServiceAccount + RBAC |
| Privileged access | SCPs + permission boundaries | Org policies + IAM Conditions | RBAC + admission control |
| Access review evidence | IAM Access Analyzer + reports | Recommender + IAM audit logs | RBAC manifests in Git |
The single biggest unlock for most ISVs: eliminate long-lived IAM users and access keys. Move to SSO for humans and federated identity (IRSA, Workload Identity) for workloads. That change alone collapses an entire category of audit findings.
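What the corresponding evidence check can look like, as a minimal sketch: the sample data below stands in for what a real `boto3` `list_access_keys` call would return, and the 90-day threshold is an example value, not a SOC 2 requirement — use whatever your stated policy says.

```python
from datetime import datetime, timedelta, timezone

# Example threshold; substitute the maximum key age from your stated policy
MAX_KEY_AGE = timedelta(days=90)

def stale_access_keys(keys, now=None):
    """Return access keys older than MAX_KEY_AGE -- candidates for a finding.

    `keys` mimics the metadata shape boto3's iam list_access_keys returns:
    {"UserName": ..., "AccessKeyId": ..., "CreateDate": datetime}.
    """
    now = now or datetime.now(timezone.utc)
    return [k for k in keys if now - k["CreateDate"] > MAX_KEY_AGE]

# Hypothetical sample data standing in for a real API call
sample = [
    {"UserName": "ci-bot", "AccessKeyId": "AKIAEXAMPLE00000001",
     "CreateDate": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"UserName": "deploy", "AccessKeyId": "AKIAEXAMPLE00000002",
     "CreateDate": datetime.now(timezone.utc) - timedelta(days=10)},
]
flagged = stale_access_keys(sample)
```

Run on a schedule and filed as a report, this is exactly the kind of recurring artifact an auditor wants to see for the quarterly-review control.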
2. Logging and monitoring
What auditors want: centralized, tamper-resistant logs covering authentication events, infrastructure changes, and application activity — retained for the audit observation period (minimum), reviewed regularly, and alerting on suspicious patterns.
The concrete controls:
- Cloud control plane logs enabled across all accounts/projects
- Logs shipped to a central, immutable store
- Retention aligned with your stated policy (typically 90 days hot, 1 year archived)
- Documented alerting on security-relevant events
- Periodic review evidence (someone actually looks at the logs)
Tooling map:
| Concern | AWS | GCP | Kubernetes |
|---|---|---|---|
| Control plane audit | CloudTrail (org-wide) | Cloud Audit Logs | API Server audit logs |
| Aggregation | CloudWatch Logs + S3 | Cloud Logging + GCS | Fluent Bit → central backend |
| SIEM / analysis | Security Hub, GuardDuty | SCC, Chronicle | Falco, kube-bench |
| Tamper resistance | S3 Object Lock + MFA delete | Bucket retention policies | Off-cluster log sink |
The dual-cloud reality bites here. At the medical ISV I work with, which runs both AWS and GCP, we landed both clouds’ audit logs into a single central store and built one set of alerting rules on top. Auditors much prefer a unified view to “here’s our AWS evidence, here’s our separate GCP evidence” — it shows the controls are actually operating as a consistent program.
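A central store only pays off if one rule set runs over it. Here is an illustrative sketch of a “documented alerting on security-relevant events” rule: the events are CloudTrail-shaped samples, and the two rules (console login without MFA, root usage) are examples — a real deployment would express these in your SIEM’s rule language.

```python
def is_alertable(event):
    """Flag a CloudTrail-style event that should page someone.

    Rules here are illustrative: console login without MFA, and any
    activity by the root identity.
    """
    if event.get("eventName") == "ConsoleLogin":
        mfa = event.get("additionalEventData", {}).get("MFAUsed")
        if mfa != "Yes":
            return True
    if event.get("userIdentity", {}).get("type") == "Root":
        return True
    return False

# Hypothetical sample events in CloudTrail's record shape
events = [
    {"eventName": "ConsoleLogin",
     "additionalEventData": {"MFAUsed": "No"},
     "userIdentity": {"type": "IAMUser"}},
    {"eventName": "DescribeInstances",
     "userIdentity": {"type": "IAMUser"}},
]
alerts = [e for e in events if is_alertable(e)]
```

The point of the single rule set: the same function (or SIEM rule) evaluates events from every account and project, so the “alerting on suspicious patterns” control is one control, not one per cloud.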
3. Encryption
What auditors want: data encrypted in transit and at rest, with documented key management practices.
The concrete controls:
- TLS 1.2+ for all external traffic
- Encryption at rest enabled on all data stores (default in modern AWS/GCP services, but verify)
- Key management with rotation policies
- Customer-managed keys (CMK) where required by contract or regulation
Tooling map:
| Concern | AWS | GCP | Kubernetes |
|---|---|---|---|
| KMS | AWS KMS | Cloud KMS | External Secrets Operator + cloud KMS |
| Storage encryption | EBS, S3, RDS (default) | Persistent Disk, GCS, Cloud SQL (default) | etcd encryption at rest |
| Secrets | Secrets Manager / Parameter Store | Secret Manager | Sealed Secrets, ESO, Vault |
| TLS termination | ALB / CloudFront | Cloud Load Balancing | Ingress + cert-manager |
Modern cloud encryption is largely “on by default” — the audit gap is usually secrets handling, not data-at-rest. Hardcoded credentials in code, service-account JSON keys committed to repos, or secrets passed as environment variables in plain text are the findings I see most often. Move secrets into a managed store, rotate them, and document the process.
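A minimal sketch of what catching those findings in CI can look like. The patterns below are deliberately tiny examples (an AWS access key ID prefix, a GCP service-account JSON marker, a hardcoded password assignment); real scanners such as gitleaks or trufflehog ship far larger rule sets, so treat this as an illustration, not a replacement.

```python
import re

# Example patterns only; production scanners carry hundreds of rules
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "gcp_service_account": re.compile(r'"type":\s*"service_account"'),
    "hardcoded_password": re.compile(r'password\s*=\s*["\'][^"\']+["\']', re.I),
}

def scan_text(text):
    """Return (pattern_name, line_number) hits for secret-looking strings."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for name, pat in PATTERNS.items():
            if pat.search(line):
                hits.append((name, lineno))
    return hits

# Hypothetical code snippet a pre-commit hook might scan
snippet = 'db_host = "db.internal"\npassword = "hunter2"\n'
findings = scan_text(snippet)
```

Wire a check like this (or a real scanner) into pre-commit and CI, and the “no secrets in code” control produces its own evidence on every push.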
4. Change management
What auditors want: evidence that production changes are reviewed, tested, approved, and traceable — and that emergency changes have a documented exception path.
The concrete controls:
- All production changes go through version control
- Code review enforced (branch protection, required approvers)
- CI/CD pipelines with automated testing gates
- Deployment approvals for production
- Rollback capability and incident-driven change tracking
Tooling map:
- Source control: GitHub / GitLab / Bitbucket with branch protection and signed commits
- CI/CD: GitHub Actions, GitLab CI, Bitbucket Pipelines — with OIDC federation to cloud (no long-lived keys)
- GitOps: ArgoCD or Flux for Kubernetes — every cluster state change is a Git commit, which is its own audit trail
- IaC: Terraform / Terragrunt / Crossplane — same principle, infrastructure changes flow through PRs
GitOps is genuinely a SOC 2 superpower. When every change to production is a reviewed, approved, merged commit — and the cluster reconciles to that state automatically — you’ve turned change management from a process people have to follow into one they can’t avoid.
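The gate logic itself is small enough to sketch. The `ChangeRequest` fields below are hypothetical stand-ins for the PR metadata your SCM’s API would return, and the thresholds (two approvers, an exception ticket for emergencies) are example policy values — the shape of the check is what matters.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRequest:
    """Hypothetical stand-in for PR metadata from your SCM's API."""
    approvals: int = 0
    checks_passed: bool = False
    is_emergency: bool = False
    emergency_ticket: Optional[str] = None

def may_deploy(cr, required_approvals=2):
    """Encode the change-management gate: normal changes need review plus
    green CI; emergency changes need a documented exception ticket."""
    if cr.is_emergency:
        return cr.emergency_ticket is not None
    return cr.approvals >= required_approvals and cr.checks_passed
```

In practice this lives in branch protection rules and a pipeline approval step rather than hand-rolled code, but spelling it out makes clear what the auditor is testing: that the gate exists, that it cannot be skipped, and that the emergency path leaves a ticket behind.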

5. Vulnerability management
What auditors want: evidence that you find vulnerabilities (in code, dependencies, containers, and infrastructure), prioritize them, and remediate them within stated SLAs.
The concrete controls:
- SAST and dependency scanning in CI
- Container image scanning before deployment
- Infrastructure scanning (CSPM)
- Documented remediation SLAs by severity
- Annual third-party penetration testing (covered in Part 1 — you can’t pen-test yourself)
Tooling map:
| Concern | AWS | GCP | Kubernetes |
|---|---|---|---|
| CSPM | Security Hub, Inspector | SCC | kube-bench, Polaris |
| Image scanning | ECR scanning, Inspector | Artifact Registry scanning | Trivy, Grype in CI |
| Code scanning | CodeGuru, third-party SAST | Cloud Build + third-party | Runs in CI, cluster-agnostic |
| Runtime | GuardDuty | SCC threat detection | Falco |
The audit question that catches teams off guard isn’t “do you scan?” — it’s “show me the ticket where vulnerability X was found, the decision on its severity, and evidence of remediation or accepted risk.” Scanning produces noise. Triage and tracking turn it into evidence.
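The tracking half can be sketched in a few lines. The SLA table below uses example values (substitute your documented policy), and the finding records are hypothetical samples of what your triage tickets would carry.

```python
from datetime import date, timedelta

# Example remediation SLAs by severity; substitute your stated policy
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def overdue(findings, today):
    """Return IDs of findings past their remediation SLA that are neither
    remediated nor formally risk-accepted -- i.e., audit exceptions."""
    late = []
    for f in findings:
        if f.get("risk_accepted") or f.get("remediated_on"):
            continue
        deadline = f["found_on"] + timedelta(days=SLA_DAYS[f["severity"]])
        if today > deadline:
            late.append(f["id"])
    return late

# Hypothetical triage records
findings = [
    {"id": "VULN-1", "severity": "critical", "found_on": date(2024, 1, 1)},
    {"id": "VULN-2", "severity": "low", "found_on": date(2024, 1, 1),
     "risk_accepted": True},
]
late = overdue(findings, today=date(2024, 2, 1))
```

Note the `risk_accepted` branch: an accepted risk with documentation is a clean audit answer; an unacknowledged overdue finding is an exception.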
The dual-cloud honest truth
Running on both AWS and GCP — like the medical ISV — doubles the surface area for almost every control above. Every IAM model, every logging pipeline, every encryption story has to be told twice. There are two ways to handle it:
- Unified abstractions where possible — central log store, single SIEM, federated identity, IaC modules that emit equivalent constructs in both clouds. More upfront work, much less audit pain.
- Two parallel programs — accept the duplication, document each cloud’s controls separately, and live with the evidence-collection overhead.
Most ISVs I see drift into option 2 by accident and wish they’d chosen option 1 deliberately. If you’re early enough that this is still a choice, choose unification.
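What “unified abstractions” looks like at the log layer, sketched: map each cloud’s native audit record into one schema so every downstream rule and report is written once. The field paths below follow the common CloudTrail and Cloud Audit Logs shapes but are simplified, and the sample records are hypothetical.

```python
def normalize(event, cloud):
    """Map a raw audit event into one schema so a single rule set and a
    single evidence report cover both clouds. Simplified field paths."""
    if cloud == "aws":  # CloudTrail record
        return {
            "when": event["eventTime"],
            "actor": event["userIdentity"].get("arn"),
            "action": event["eventName"],
            "service": event["eventSource"],
        }
    if cloud == "gcp":  # Cloud Audit Logs entry
        p = event["protoPayload"]
        return {
            "when": event["timestamp"],
            "actor": p["authenticationInfo"]["principalEmail"],
            "action": p["methodName"],
            "service": p["serviceName"],
        }
    raise ValueError(f"unknown cloud: {cloud}")

# Hypothetical sample records in each cloud's native shape
aws_raw = {
    "eventTime": "2026-04-01T12:00:00Z",
    "eventName": "PutBucketPolicy",
    "eventSource": "s3.amazonaws.com",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:role/deployer"},
}
gcp_raw = {
    "timestamp": "2026-04-01T12:00:05Z",
    "protoPayload": {
        "methodName": "SetIamPolicy",
        "serviceName": "cloudresourcemanager.googleapis.com",
        "authenticationInfo": {"principalEmail": "deployer@example.iam.gserviceaccount.com"},
    },
}
unified = [normalize(aws_raw, "aws"), normalize(gcp_raw, "gcp")]
```

One normalization layer is the upfront work option 1 asks for; everything built on top of it (alerts, retention checks, access reports) is then written and evidenced once.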
Conclusion
The technical foundation for SOC 2 isn’t exotic. It’s identity hygiene, log aggregation, encryption, GitOps, and vulnerability management — done consistently and with evidence trails. Most platform teams have pieces of this; the audit forces you to finish them and prove they operate over time.
In Part 4 we’ll cover how to make all of this continuous — policy-as-code, drift detection, automated evidence collection — so SOC 2 becomes a byproduct of how you build, not a yearly fire drill.
Further Reading
- AWS SOC 2 customer responsibility matrix
- GCP compliance offerings
- CIS Kubernetes Benchmark
- Part 1: The Price of Admission
- Part 2: The Five Trust Service Criteria, Demystified
Originally published on Medium on 2026-04-28. Cross-posted to portfolio.hagzag.com.