Planning a production-ready Kubernetes with fundamental Controllers & Operators — Part 4

Originally posted on the Israeli Tech Radar on Medium.

Welcome back to part four of our series on building a production-ready Kubernetes cluster with fundamental controllers and operators! In the previous parts, we explored essential components like Secrets and DNS management. Today, we’ll delve into the world of Ingress, a critical concept for routing external traffic to your applications within the cluster. To explain Ingress, I’ll take the analogy approach and compare a city to a modern distributed computer:

DALL-E | Ingress as an entrance to a city
  • **Growing City, Expanding Roads** As a city grows, so do its roads and pathways. Similarly, as your Kubernetes cluster expands, you need more sophisticated routing and traffic management. Ingress acts like a network of roads and pathways directing external traffic to the right applications within your cluster.

  • **Diverse Forms of Traffic** Just as a city has various forms of transportation (cars, buses, bicycles, trams …), Ingress handles different types of network traffic such as HTTP and HTTPS. This ensures that requests reach their destinations efficiently and securely.

  • **Passport Exchanges and SSL** Think of some cities as very strict states requiring a passport exchange every time you enter (analogous to strict HTTP security checks); take Vatican City as an example: they check your passport when you enter, but they don’t seem to care much when you leave. Other cities communicate freely without passport exchanges within the city (similar to unencrypted traffic inside a cluster). However, when communicating with other cities or countries (other clusters or computers), they might require mutual SSL, ensuring secure and verified exchanges of information.

Key Points of Ingress

  • Routing External Traffic: Ingress routes external traffic into the cluster, directing it to the appropriate services based on defined rules.

  • Handling Different Protocols: Ingress manages HTTP and HTTPS traffic, ensuring secure and efficient communication.

  • Security and Verification: Like passport checks between cities, Ingress can enforce security measures such as SSL and mutual SSL for verifying and securing traffic between clusters.

In a nutshell, Ingress is crucial for managing how external traffic accesses your applications within a Kubernetes cluster, providing secure and efficient routing similar to how a well-planned city manages its transportation (services); in many cases, different companies operate different forms of transportation.

So it’s really a reverse proxy all over again?!

DALL-E | Back to the good old reverse proxying

In the past, we used to set up a load balancer in front of our NGINX instance to manage traffic to a Jenkins on port 8080. This setup allowed for autoscaling, but required us to reconfigure the routing each time the hosts changed in the auto-scaling group. Now, Kubernetes handles all of this out of the box. With an Ingress resource pointing to a Service targeting a pod, Kubernetes automates load balancing, scaling (which we’ll discuss in more detail in part 5 🤞🏼), and routing updates seamlessly (thanks to the cloud-controller-manager).
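
To make that chain concrete, here is a minimal sketch of the Service sitting between the Ingress and the pods; the names (test-service, app: jenkins) and ports are assumptions chosen to echo the Jenkins example and the Ingress manifest later in this post:

---
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: jenkins          # assumption: the pods carry this label
  ports:
  - port: 80              # port the Ingress sends traffic to
    targetPort: 8080      # port the Jenkins container listens on

Because the Service tracks healthy pods through its selector, upstream changes caused by autoscaling no longer require any manual reconfiguration.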

Nginx | From good old NGINX to ingress-nginx on k8s + plugins

Why Is Ingress So Important?

Imagine your Kubernetes cluster as a bustling city with various addresses (applications), each offering unique services. Customers (external traffic) need a way to find and access these addresses. Ingress acts as a gateway, directing this traffic efficiently.

  1. Cost (FinOps) 💸 From a cost perspective, using a service type LoadBalancer for each service can become prohibitively expensive. Each LoadBalancer instance typically incurs a recurring cost, which scales linearly with the number of services requiring external access. In contrast, an Ingress controller uses a single LoadBalancer to route traffic to multiple services within the cluster. This approach significantly reduces the number of LoadBalancers needed, thus cutting down on overall costs. For environments with multiple stages (e.g., staging and production), employing an internal and external Ingress for each stage remains more cost-effective than provisioning a separate LoadBalancer for every service.

  2. Security 🔐 Considering security, the LoadBalancer-per-service approach can introduce numerous entry points into the network, akin to having many entrances to a city. Each LoadBalancer exposes a potential attack vector, increasing the risk of security breaches. Conversely, an Ingress controller consolidates these entry points into a single managed instance, which can be fortified more effectively. By configuring separate internal and external Ingresses, you can segment traffic appropriately, enhancing security. This configuration mirrors traditional network setups, where internal and external traffic is handled separately, thereby minimizing the attack surface while maintaining efficient traffic management.

  3. Security ++ / vs. Cost Efficiency 🔐💰 When using multiple LoadBalancers, each one needs to implement routing rules and update them whenever there is a node change. With an Ingress controller, which equates to one LoadBalancer per cluster (or two if you have both public and private Ingresses), these configuration changes are less frequent. The Ingress controller, a new subsystem not included in a barebones Kubernetes cluster, handles application routing efficiently. Think of security as the police of the city. In the cloud world, when we provision a new LoadBalancer, we enable logging to monitor traffic, similar to assigning a cop to a new road to ensure it’s functioning correctly. Once traffic is established, the need for constant oversight diminishes unless required by regulations. This setup reduces the number of operations needed, such as iptables updates or network plugin reloads.

  4. More vs. Less Flexibility & Scalability 🪢

  • Configuration updates risk all routes in the cluster! Most common Ingress controllers have webhook validations, which many may take lightly or overlook. If this validation is not properly managed, it can lead to disruptions in the routing configuration for the entire cluster. This means that while Ingress offers great flexibility, it also requires careful management to avoid misconfigurations that could impact all services.

  • You can always combine a Service of type LoadBalancer / ALB for special use-cases and Ingress for all the rest (see the sketch after this list) …

  • Ingress in dev, an Application LoadBalancer in production with high load
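
A minimal sketch of that hybrid approach, assuming AWS (the NLB annotation is one provider-specific example): a hypothetical raw-TCP workload keeps its own LoadBalancer while HTTP traffic stays on the shared Ingress.

---
apiVersion: v1
kind: Service
metadata:
  name: special-tcp-service             # hypothetical name, for illustration
  annotations:
    # AWS-specific example: provision an NLB instead of the default CLB
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: special-app                    # hypothetical workload label
  ports:
  - port: 5432                          # e.g. a raw TCP protocol a plain HTTP Ingress can't route
    targetPort: 5432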

Literally anything can be done As A Service

I hope that at this point we’ve established the necessity for an Ingress controller; we’re left with the joy of picking one (and I’m really choosing a simple use case here).

DALL-E | Choosing an ingress controller as “choosing my religion”, paraphrasing R.E.M.

Installation details for various Ingress controllers are beyond the scope of this post, but they all look similar in their basic configuration. Here are some popular Ingress controller options to consider:

  • Ingress-Nginx: A widely-used controller leveraging the power of Nginx for routing and load balancing.

  • Traefik: A dynamic and versatile Ingress controller known for its ease of use.

  • Contour: A lightweight and high-performance Ingress controller built specifically for Kubernetes.

  • HAProxy: A mature and battle-tested load balancer that can function as an Ingress controller.

  • Istio: A service mesh offering advanced traffic management capabilities, including Ingress functionality.

Choosing the right Ingress controller depends on your specific needs and preferences. Given that ingress-nginx is the most commonly used controller among our customers, let’s take a look at using NGINX for basic HTTP routing. The practice should be very similar with other controllers, because we rely on two standard resources: ingressClass and, of course, the ingress resource mentioned above.

In a nutshell, both of these resources are recognized by Kubernetes’ standard API. The ingressClass is defined by the controller during installation, and the ingress resource references it in its declarative definition, like so:

---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: test.example.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 80
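
Once applied, requests to http://test.example.com/testpath (and anything under it, per pathType: Prefix) are forwarded to test-service on port 80, and the rewrite-target annotation rewrites the matched path to / before the request reaches the pods.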

Securing Your City: SSL Termination

Now that traffic flows smoothly, let’s consider security. For secure communication, we need to implement Secure Sockets Layer (SSL) termination. This process encrypts data between your visitors (external traffic) and the shops (applications) within the cluster.

There are three approaches to SSL termination with Ingress:

  1. Self-signed Certificates: This is a basic option for testing purposes. You can attach self-signed certificates directly to your Ingress resources. However, self-signed certificates come with browser trust warnings and are not ideal for production environments.

  2. Cloud-based / Purchased SSL certificate: Before cloud providers, a common option was to purchase an SSL certificate online; once you had the key and cert you were good to go. This is now something you can do via API with your cloud provider, or even very cheaply to free with the commonly used Let’s Encrypt (see the Secret sketch after this list).

  3. Cert-Manager and Let’s Encrypt: For production, we recommend using a dedicated certificate management solution like Cert-Manager or a cloud-based managed one. As the integration with ingress-nginx is so simple, I’ll take Cert-Manager and its integration with Let’s Encrypt, a free and trusted certificate authority, to automatically provision and manage valid SSL certificates for your Ingress resources.
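
For the first two options, the outcome is the same: the key and certificate pair lands in a kubernetes.io/tls Secret that the Ingress references. A minimal sketch (the name is an assumption and the base64 values are placeholders):

---
apiVersion: v1
kind: Secret
metadata:
  name: test-example-com-tls
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi4uLg==   # placeholder: base64-encoded certificate
  tls.key: LS0tLS1CRUdJTi4uLg==   # placeholder: base64-encoded private key

The Ingress then lists the host and this secretName under spec.tls, as shown in the Cert-Manager section below, where the Secret is created for you instead of by hand.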

Once our route is set, let’s start securing it with Cert-Manager and Let’s Encrypt.

Let’s delve deeper into securing your Ingress resources with Cert-Manager and explore the Custom Resource Definitions (CRDs) that Cert-Manager utilizes to manage certificates.

DALL-E | Cert-Manager and Let’s Encrypt

Understanding Cert-Manager’s CRDs 🆔 🆕 🆒

Cert-Manager introduces several CRDs that define how certificates are requested, issued, and renewed within your Kubernetes cluster. Here are the key ones:

  1. ClusterIssuer: This CRD defines a source for issuing certificates. In our case, the issuer would be Let’s Encrypt. The ClusterIssuer specifies details like the Let’s Encrypt staging or production environment and any challenge solver configuration (more on that later; a minimal sketch follows this list).

  2. Issuer: Similar to ClusterIssuer, but defines an issuer specific to a particular namespace within the cluster. This allows for granular control over certificate issuance for different applications.

  3. Certificate: This CRD represents the actual SSL certificate you desire. The Certificate specifies the domain names the certificate should cover, the desired secret where the certificate and private key will be stored, and a reference to the issuer (either ClusterIssuer or Issuer) responsible for issuing the certificate.

  4. CertificateRequest: This CRD is an intermediate step used internally by Cert-Manager. When you create a Certificate resource, Cert-Manager automatically creates a corresponding CertificateRequest resource. This request is then sent to the issuer for processing.
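
To make the first of these concrete, here is a minimal ClusterIssuer sketch pointing at the Let’s Encrypt staging environment with an HTTP-01 solver wired to ingress-nginx; the email address and secret name are assumptions:

---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint: untrusted certificates, but generous rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: ops@example.com                 # assumption: your contact address
    privateKeySecretRef:
      name: letsencrypt-staging-account    # Secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx                     # solve HTTP-01 challenges via ingress-nginx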

As with the other Ingress controllers mentioned above, the integration is quite seamless. You may want to work against the Let’s Encrypt staging API until you’re ready to go to production (the sketch above points at staging for this reason); there are several caveats.
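
With the issuer in place, the ingress-shim integration is a one-annotation change to the Ingress from earlier: cert-manager watches for the annotation, creates the Certificate behind the scenes, and stores the signed pair in the named Secret (the secret name below is an assumption):

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Tells cert-manager which issuer to use for this host
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - test.example.com
    secretName: test-example-com-tls    # cert-manager creates and renews this Secret
  rules:
  - host: test.example.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 80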

Integration with Ingress NGINX via Verification Phase / Webhook

Integrating Cert-Manager with Ingress NGINX often involves a verification phase, where a webhook is used to validate certificate requests. This webhook acts as an authority, ensuring that the certificate issuance process adheres to security policies and configurations. When a new certificate request is made, the webhook verifies that the request meets specific criteria before allowing the certificate to be issued. This step is crucial for maintaining the integrity and security of the routing setup. By validating the requests through a webhook, we can prevent misconfigurations that could disrupt the entire cluster’s routing. This integration streamlines the SSL certificate management process, making it more robust and secure, akin to having a city authority that ensures all new constructions comply with the city’s building codes.


An Alternate Setup: Cloud-Provided LoadBalancer with SSL Termination

An alternative setup for Ingress NGINX is to use a cloud-provided LoadBalancer with an SSL certificate attached, managed by the cloud provider’s SSL termination. This setup offloads the SSL termination process to the cloud provider, simplifying management and enhancing security. By leveraging cloud services, you can ensure high availability and scalability, reducing the burden on your Kubernetes cluster.
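
A rough sketch of that setup, assuming AWS (the annotations are specific to the AWS cloud controller, and the ACM certificate ARN is a placeholder); TLS is stripped at the cloud load balancer, so the controller’s Service receives plain HTTP on both ports:

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller        # assumption: name used by the official chart
  namespace: ingress-nginx
  annotations:
    # AWS-specific: terminate TLS at the ELB using an ACM certificate (ARN is a placeholder)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:111122223333:certificate/placeholder
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumption: the chart's controller labels
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http    # TLS already stripped by the cloud load balancer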

Conclusion

The more roads you open into your city, the more complex routing becomes. The efficiency of traffic management depends on how crowded your city is. We will explore this analogy further in subsequent posts, focusing on scaling and maintaining performance in a growing Kubernetes environment.

Up next — scaling those moving parts: “the VMs”, “the Kubernetes worker plane”, “minions”; so many names for the same thing.

Hope you enjoyed the post; I’ll be glad to get your feedback. Until Part 5, yours sincerely, Haggai Philip Zagury.

