IaC & GitOps with EKS blueprints

Originally posted on the Israeli Tech Radar on Medium.

TLDR; Need a cluster up and running fast? Take a close look at eks-blueprints; I got started in minutes and have been working with it for almost two years now.

As I travel between customers I see many implementations, and wherever I look it seems like everyone is writing the same code to set up their cluster. As a consultant, I keep getting the “why do I need it” question, and my answer is quite simple … reuse, and stop taking ownership of practices which are supposed to be simple enough to share …

To be honest, after looking at eks-blueprints I asked myself how it differs from the standard terraform-aws-modules EKS module, and in a way it doesn’t. It actually wraps around the standard EKS module but adds sugar and spice on top …

From worried to Frustrated!

As someone who has worked closely with the standard EKS module, somewhere around version 18.30.x the module changed how it configures cluster access (the aws-auth ConfigMap …), and it seemed like we couldn’t recover from that, which kind of broke my entire belief in my IaC code.

A little searching and I found “someone” had solved that issue, and added much more than I expected to see …

So what’s all the fuss about EKS blueprints?

Amazon Elastic Kubernetes Service (EKS) Blueprints provide a set of pre-defined configurations and templates for common Kubernetes-based workloads, including web applications, databases, and machine learning workloads.

These blueprints can help organizations quickly get up and running with Kubernetes on EKS, without having to spend a lot of time on configuration and management tasks.
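To make this concrete, here is a minimal sketch of provisioning a cluster with the v4-style root module, loosely based on the getting-started guide; the cluster name, the VPC module outputs, and the node-group sizing are illustrative assumptions, not a drop-in configuration:

module "eks_blueprints" {
  # Hedged sketch based on the v4 getting-started example; pin the ref you actually use
  source = "github.com/aws-ia/terraform-aws-eks-blueprints?ref=v4.24.0"

  cluster_name    = "my-demo-cluster" # illustrative
  cluster_version = "1.24"

  # Assumes an existing VPC module exposing these outputs
  vpc_id             = module.vpc.vpc_id
  private_subnet_ids = module.vpc.private_subnets

  managed_node_groups = {
    default = {
      node_group_name = "managed-ondemand"
      instance_types  = ["m5.large"]
      min_size        = 1
      max_size        = 3
      subnet_ids      = module.vpc.private_subnets
    }
  }
}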

The full-blown eks-blueprints solution

In my first encounter I found that I could install all the controllers I needed, something I used to do via ArgoCD and GitOps … and I found this:

module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons"

      eks_cluster_id = <EKS-CLUSTER-ID>
    
      # EKS Addons
    
      enable_amazon_eks_aws_ebs_csi_driver  = true
      enable_amazon_eks_coredns             = true
      enable_amazon_eks_kube_proxy          = true
      enable_amazon_eks_vpc_cni             = true
    
      #K8s Add-ons
      enable_argocd                        = true
      enable_aws_for_fluentbit             = true
      enable_aws_load_balancer_controller  = true
      enable_cluster_autoscaler            = true
      enable_metrics_server                = true
    }

There is a very long list of add-ons -> https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/modules/kubernetes-addons.

What I didn’t understand was the need to run terraform every time I had a change in one of the controllers … e.g. cluster_autoscaler, karpenter, or anything else that isn’t ArgoCD itself (chicken and egg …).
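Spoiler: the module has an answer for that exact chicken-and-egg. A minimal sketch, assuming the v4 kubernetes-addons interface, where argocd_manage_add_ons tells terraform to install only ArgoCD itself and hand the remaining add-ons over to GitOps (more on manage_via_gitops below):

module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons"

  eks_cluster_id = module.eks.cluster_name

  # Terraform installs ArgoCD itself ...
  enable_argocd = true

  # ... and ArgoCD, not terraform, becomes responsible for deploying the other add-ons
  argocd_manage_add_ons = true
}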

ArgoCD

For those of you who aren’t familiar with ArgoCD (shame on you!), ArgoCD is a popular GitOps tool that allows for the declarative management of Kubernetes resources using Git as the single source of truth. With ArgoCD, organizations can automate the deployment and management of their Kubernetes resources, reducing the risk of configuration drift and human error.

When used together, EKS Blueprints and ArgoCD can provide organizations with a powerful combination of tools to manage their Kubernetes infrastructure. Here are some ways in which EKS Blueprints can integrate well with ArgoCD and GitOps best practices:

Standardization: EKS Blueprints provide a standardized way of deploying common Kubernetes workloads. By using EKS Blueprints as a starting point, organizations can ensure that their Kubernetes clusters are configured consistently across environments. ArgoCD can then be used to manage the deployment and lifecycle of these resources, ensuring that they remain consistent over time, something that usually takes a few months to accomplish otherwise.

Infrastructure as code: Both EKS Blueprints and ArgoCD rely on infrastructure as code (IaC) to manage Kubernetes resources. With IaC, organizations can define their Kubernetes infrastructure and configurations as code, which can be version-controlled and managed using Git. This allows for a more scalable and automated approach to managing Kubernetes resources.

module "eks_blueprints_kubernetes_addons" {
  source = "../../modules/kubernetes-addons"

      eks_cluster_id       = module.eks.cluster_name
      eks_cluster_endpoint = module.eks.cluster_endpoint
      eks_oidc_provider    = module.eks.oidc_provider
      eks_cluster_version  = module.eks.cluster_version
    
      # Wait on the `kube-system` profile before provisioning addons
      data_plane_wait_arn = join(",", [for prof in module.eks.fargate_profiles : prof.fargate_profile_arn])
    
      enable_karpenter = true
      karpenter_helm_config = {
        repository_username = data.aws_ecrpublic_authorization_token.token.user_name
        repository_password = data.aws_ecrpublic_authorization_token.token.password
      }
      karpenter_node_iam_instance_profile        = module.karpenter.instance_profile_name
      karpenter_enable_spot_termination_handling = true
      karpenter_sqs_queue_arn                    = module.karpenter.queue_arn
    
      tags = local.tags
}

**Standardization:** almost every controller needs IRSA (IAM Roles for Service Accounts), which is covered by the eks-blueprints for what I call “the common controllers” (see the sketch after this list):

  • ArgoCD
  • external-dns
  • external-secrets
  • aws-load-balancer-controller
  • cert-manager
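For intuition, here is a minimal sketch of what IRSA boils down to for each of those controllers: an IAM role whose trust policy lets exactly one Kubernetes service account assume it via the cluster’s OIDC provider. The role name, namespace, and service-account name below are illustrative assumptions:

# Illustrative only: the external-dns service account in kube-system
# is the sole identity allowed to assume this role via OIDC
data "aws_iam_policy_document" "irsa_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [module.eks.oidc_provider_arn]
    }

    condition {
      test     = "StringEquals"
      variable = "${module.eks.oidc_provider}:sub"
      values   = ["system:serviceaccount:kube-system:external-dns"]
    }
  }
}

resource "aws_iam_role" "external_dns" {
  name               = "external-dns-irsa" # illustrative
  assume_role_policy = data.aws_iam_policy_document.irsa_trust.json
}

The add-on modules generate roles like this for you, together with the matching service-account annotation, so you never have to write it by hand.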

What’s even more awesome … you can opt out of installing the helm chart by adding a small snippet to your code which delegates the installation to ArgoCD. Basically, for each add-on where you specify manage_via_gitops, you are delegating the installation to ArgoCD …

enable_external_dns = true

external_dns_helm_config = {
  manage_via_gitops = true
}

Similar to the example I mentioned earlier, with manage_via_gitops = true the helm chart for external-dns will not be installed, but the IRSA will still be created … To complete the picture we can now add the argocd_applications var, and we’re good to go:

locals {
  environment = var.env
}

argocd_applications = {
  test-app-http = {
    # test alb + ingress nginx
    path               = ".test/app/${local.environment}/http"
    repo_url           = "git@github.com:hagzag/apps.git"
    add_on_application = false
    type               = "kustomize"
    target_revision    = "HEAD"
    create_namespace   = false
  }
}

Continuous deployment: ArgoCD can be used to automate the deployment of Kubernetes resources, including those defined by EKS Blueprints. In the example above, installing karpenter took under an hour and perhaps a couple of days to stabilize … This allows organizations to implement a continuous deployment approach to Kubernetes, where changes to infrastructure and applications are automatically deployed to production environments. This can help reduce the time it takes to deploy changes, while also reducing the risk of human error.

Auditing and compliance: EKS Blueprints can help organizations meet auditing and compliance requirements by providing pre-defined configurations that meet common security and compliance standards. ArgoCD can be used to manage and audit these configurations over time, providing an audit trail of changes made to Kubernetes resources.

Standardizing Teams

For a great example of standardizing teams, take a look at https://aws-ia.github.io/terraform-aws-eks-blueprints/v4.24.0/teams/, which creates a standard for managing developer access to the cluster, e.g.:

platform_teams = {
  admin-team-name-example = {
    users = [
      "arn:aws:iam::123456789012:user/admin-user",
      "arn:aws:iam::123456789012:role/org-admin-role"
    ]
  }
}

In the use case above the admin team lets certain IAM users and roles assume a cluster-admin role … and for application teams you can create a custom role:

application_teams = {
  # First Team
  team-a = {
    "labels" = {
      "appName"     = "team-a",
      "projectName" = "foo",
      "environment" = "dev",
      "domain"      = "example.com",
      "uuid"        = "example-zzz",
    }
    "quota" = {
      "requests.cpu"    = "1000m",
      "requests.memory" = "4Gi",
      "limits.cpu"      = "2000m",
      "limits.memory"   = "8Gi",
      "pods"            = "10",
      "secrets"         = "10",
      "services"        = "10"
    }
    manifests_dir = "./manifests"
    # Below are examples of IAM users and roles
    users = [
      "arn:aws:iam::123456789012:user/team-a-user",
      "arn:aws:iam::123456789012:role/team-a-sso-iam-role"
    ]
  }
  ...
}

So the above also saves a lot of trial and error around how RBAC should be managed per team, which in many cases is quite a headache to handle.

In addition to the eks-blueprints git repo itself, another one worth mentioning is https://github.com/aws-samples/eks-blueprints-add-ons.git, which has a helm chart of all the add-ons (I forked it and use it in our internal GitLab).
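If you go that route, wiring that repo (or your fork) back in is one more entry in argocd_applications; a sketch assuming the v4 interface, where add_on_application = true marks it as the App-of-Apps that installs the add-ons you opted out of above:

argocd_applications = {
  addons = {
    path               = "chart" # the helm chart inside the repo
    repo_url           = "https://github.com/aws-samples/eks-blueprints-add-ons.git" # or your fork
    add_on_application = true # this application manages the add-ons themselves
  }
}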

So if you are new to IaC / terraform, I suggest you spend an hour or so reviewing the getting-started guide at https://aws-ia.github.io/terraform-aws-eks-blueprints/v4.24.0/getting-started/. The first time I used it, it took me under an hour to have a cluster up and running, plus a node group and the set of controllers I needed installed on the cluster …

Manage everything on Kubernetes via GitOps!

Overall, the combination of EKS Blueprints and ArgoCD can provide organizations with a powerful set of tools to manage their Kubernetes infrastructure. By using EKS Blueprints as a starting point and ArgoCD to manage the deployment and management of Kubernetes resources, organizations can implement a more scalable, automated, and secure approach to Kubernetes management.
