
Infrastructure as Code: Navigating Declarative and Imperative Approaches
- Haggai Philip Zagury (hagzag)
- DevOps, Infrastructure as Code
- January 15, 2024
Originally posted on the Israeli Tech Radar on medium.
I read somewhere in my late-night browsing that 71% of infrastructure as code is done using Terraform. That's a huge number, right? And even if the figure isn't exact, the truth isn't far from it. Terraform has almost become the default choice. But what if I told you that the default isn't always the best fit for everyone?
When it comes to IaC, I've been in the trenches, wrestling with both declarative and imperative approaches: from helping small teams juggle dozens of environments to watching giant corporations struggle with just a few!
I've felt the pain of choosing wrong and the joy of getting it right. It's not about blindly following trends; it's about picking the right tool for your team and your situation.
Let’s be real, IaC is not optional these days — it’s a necessity to scale, and the choices you make now drastically impact how you’ll manage your infrastructure.
So, what's the deal with:
- Declarative versus imperative: what's so specific about this distinction in Infrastructure as Code?
- Should we all just default to a configuration language such as Terraform, or should it be a code-based approach?
If you bear with me, I'd like to share what I've learned.
The Core Concepts: Declarative vs. Imperative
At a high level, the imperative method involves coding step-by-step instructions to achieve a specific outcome. You’re essentially writing a recipe for how to build your infrastructure. On the other hand, a declarative approach involves defining the desired result and letting the system figure out the how, based on pre-programmed rules, models, and logic. Think of it this way: if you wanted to tell someone how to get to the coffee machine, imperative is listing every turn and street, while declarative is simply saying, “Go to the coffee machine”.
In the IaC world, this difference boils down to whether you're defining the specific actions to build infrastructure or defining its final state. The definition could be written in a general-purpose language such as Go or Python, or in a Domain-Specific Language (DSL) that may have evolved into a configuration language, like HashiCorp's HCL.
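To make the contrast concrete, here's a minimal, purely illustrative Python sketch. The in-memory `cloud` dict and the resource names are invented for the example; real engines also order operations by dependency (the VPC before the subnet). The imperative version spells out every step, while the declarative version hands a desired-state description to a small reconcile engine that owns the "how":

```python
# A hypothetical in-memory "cloud", used only for illustration.
cloud = {}

def imperative_provision():
    """Imperative: you write the recipe - every step, in order."""
    cloud["vpc-1"] = {"type": "vpc"}
    cloud["subnet-1"] = {"type": "subnet", "vpc": "vpc-1"}
    cloud["vm-1"] = {"type": "instance", "subnet": "subnet-1"}

# Declarative: you only describe the end state...
desired = {
    "vpc-1": {"type": "vpc"},
    "subnet-1": {"type": "subnet", "vpc": "vpc-1"},
    "vm-1": {"type": "instance", "subnet": "subnet-1"},
}

def reconcile(current, desired):
    """...and the engine owns the 'how': create, update, and prune."""
    for name, spec in desired.items():
        if current.get(name) != spec:
            current[name] = spec          # create or update
    for name in list(current):
        if name not in desired:
            del current[name]             # prune what's no longer declared
```

Both end up at the same coffee machine; the difference is who carries the map.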

The IaC Landscape Evolved Toward the Declarative
Until recently, almost by default, customers primarily relied on declarative Infrastructure as Code tools to manage their infrastructure.
If you're a single-cloud shop, you might have chosen a tool provided by the cloud provider. This was no surprise to me: back in my days as a system administrator, each vendor I chose came with its "Operations Book". Today, the cloud providers ship that operational manual in the form of an API. Tools such as AWS CloudFormation, Azure ARM, or Google CDM provide a predictable, state-managed approach, and state, from my point of view, is ideal for maintaining stable environments.

To be honest, the programmatic or imperative approach wasn't a real choice at the beginning; libraries such as AWS's boto3 or GCP's google-cloud-sdk were considered a development project, not a configuration project. The declarative approach was the only real choice for IaC. Until tools like Terraform came along, writing your infrastructure as code was a burden you either did once and tried to forget about, or avoided altogether and did manually. 🤯
The real issues occurred when the cloud providers started to offer their own tools and the need to manage multiple clouds arose. At some point, many companies start to ask themselves: Is one cloud provider enough for me❓
At that decision point, many of them also reconsider their IaC strategy: Can you reuse your current toolset❓
And finally: Which approach, and which tool or framework, should I go with❓
To recap: when we first started, it was the boto3 or google-cloud-sdk libraries, or tools like Ansible, Chef, or Puppet, each of which created its own DSL. Until that point there was no real state management; nowadays, state is the name of the game in both approaches, which opened IaC up to much more than just infrastructure provisioning.
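State is what lets a tool turn "here is what I want" into a concrete set of actions. A hedged sketch of the idea (resource names and specs are invented for the demo): diff the recorded state against the desired configuration and emit create/update/delete actions.

```python
def plan_actions(state, config):
    """Compare recorded state with desired config and list the needed actions."""
    actions = []
    for name, spec in config.items():
        if name not in state:
            actions.append(("create", name))   # declared but never provisioned
        elif state[name] != spec:
            actions.append(("update", name))   # provisioned, but drifted
    for name in state:
        if name not in config:
            actions.append(("delete", name))   # provisioned, no longer declared
    return actions
```

Without the recorded state, the "delete" column is impossible to compute, which is exactly why stateless scripting kept leaking orphaned resources.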
IaC, from “Cowboy Style” to Continuous Integration and Delivery
The early days of Infrastructure as Code, even after tools like Ansible, Chef, and Puppet emerged, often felt like the Wild West. We'd moved beyond manual configurations, but we were still often running Terraform directly from the "DevOps team's laptops" with root-account credentials, a practice I like to call "Cowboy Style". It worked in the short term, but it was far from ideal.

The real game-changer wasn't just the ability to define infrastructure as code; it was the introduction of a standardized lifecycle (init, plan, apply, and destroy) that Terraform kind of coined.
This lifecycle became the foundation for how we manage IaC and was later adopted by other tools like Pulumi and AWS CDK (with their own versions, e.g., preview/synth, up/apply).
These stages allowed us to not just create infrastructure, but also to manage it in a predictable, repeatable, and auditable way.
This shift was crucial for integrating IaC into modern CI/CD pipelines. Suddenly, we could move beyond ad-hoc deployments and achieve a state where small groups of engineers can reliably manage infrastructure, ensuring it’s aligned at all times.

We could now treat infrastructure deployment like any other code deployment, with all the benefits that come with it: version control, automated testing, and continuous delivery.
With the advanced declarative IaC approaches we've adopted at Tikal, we use Crossplane, a Kubernetes controller that continuously ensures the designed and the current state are aligned. Think of this as coupling what was traditionally continuous integration (CI) into the continuous delivery (CD) process: the infrastructure declarations are packaged in Helm charts, which are purely declarative.
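The controller pattern behind that is simple to sketch. This is an illustration of the reconcile loop only, not Crossplane's actual API; the observed/desired dicts are invented stand-ins for cluster resources:

```python
def detect_drift(observed, desired):
    """Return the corrections needed to bring observed state back in line."""
    actions = [("patch", name, spec)
               for name, spec in desired.items() if observed.get(name) != spec]
    actions += [("delete", name, None)
                for name in observed if name not in desired]
    return actions

def reconcile_loop(observed, desired):
    """Keep applying corrections until the observed state converges."""
    while actions := detect_drift(observed, desired):
        for op, name, spec in actions:
            if op == "patch":
                observed[name] = spec
            else:
                del observed[name]
    return observed
```

The point of the controller model: drift is corrected continuously, not only when someone remembers to run an apply.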

Evolving Beyond the Traditional Imperative or perhaps the rise of “Programmatic” IaC
When we talk about “imperative” IaC, it’s a bit misleading. It’s not about writing infrastructure code from scratch using raw APIs, we’re not back in the early days of scripting VMs. The evolution has been towards what I’d call “programmatic” IaC, where we use higher-level constructs, libraries, and frameworks to define our infrastructure.
Originally, we would write scripts, in bash and later on in python to create key pairs, spin up instances, etc. While technically imperative — we were giving step-by-step instructions — it was very cumbersome!
We had to reinvent the wheel each time, and that’s where things got a bit messy: we’d copy scripts from project to project, and when that became unsustainable, we started packaging them as pip packages — eventually we started using them just like regular software libraries.
This was an improvement, but still not great. Then tools like Ansible came along, giving us a DSL that defined the same things we'd previously coded, and we even learned how to distribute configuration packages, again just like traditional software packages. We ended up with Ansible playbooks and roles, or recipes and run-books... which in many cases meant yaml, i.e., a configuration.
Despite these advancements, we were still dealing with a lot of manual configuration and a lack of standardized lifecycles. We had servers like Chef or Puppet masters, or AWX, and we were still looking for methods that would let developers treat infrastructure as a code project (otherwise we shouldn't have named it Infra as Code). We already had templating systems like jinja2 or go-templating; what we needed was something that managed the state and the repetitive nature of our infrastructure, which is beyond the scope of templating.
It was cumbersome and inefficient, and that's where frameworks like AWS CDK, TfCDK, and Pulumi came into play. These tools have significantly changed the game, considering they allow you to leverage languages you're already comfortable with (Python, Go, JavaScript, TypeScript) rather than forcing you to learn a new DSL or configuration language.
They provide pre-defined constructs and encapsulate best practices, making your code more maintainable and consistent. They also give you access to the power of native languages, such as being able to apply aspect-oriented programming or handle cross-cutting concerns in your infrastructure.
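As a sketch of what a "construct" buys you (the class and its defaults are hypothetical, not a real CDK or Pulumi API): best practices are baked in as defaults, and consumers only state what differs.

```python
from dataclasses import dataclass, field

@dataclass
class SecureBucket:
    """A hypothetical construct: team best practices baked in as defaults."""
    name: str
    encrypted: bool = True     # encryption on unless someone opts out
    public: bool = False       # never public unless explicitly requested
    tags: dict = field(default_factory=lambda: {"managed-by": "iac"})

    def to_resource(self) -> dict:
        """Render into the flat resource model a framework would synthesize."""
        return {"type": "bucket", "name": self.name,
                "encrypted": self.encrypted, "public": self.public,
                "tags": dict(self.tags)}
```

A consumer writes `SecureBucket("logs")` and gets the hardened defaults for free, instead of copy-pasting the same ten lines of configuration into every project.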
In a sense, they’ve transformed “imperative” from a laborious task into a more controlled “programmatic” approach.
While it's not fully declarative (you're still defining steps in a language), you're doing it with robust libraries, pre-defined elements, and within a well-defined framework, plus lifecycle management.
What I mean is: writing real software from scratch against the raw client libraries isn't really an option for most. It's simply too much investment when there are robust and mature alternatives out there.
The Lifecycle: init, plan, apply, destroy — The Foundation of Modern IaC
Before tools like Terraform gained prominence, a crucial piece was missing from the IaC puzzle: standardized lifecycle management. Although the cloud providers offer it out of the box, their tools are deeply coupled to many proprietary services, which may introduce lock-in and unnecessary costs.
Until Terraform and its 12-factor-inspired architecture, we lacked a consistent way to prepare, preview, and execute infrastructure changes.
What we were missing was the concept of having a dedicated init stage, which is much like compilation or environment setup in a typical software development project. This is where we set up our environment, download plugins, and get everything ready.
Next, the plan stage allows us to preview the impact of our changes, letting us understand what will be created, updated, or deleted before anything is actually modified.
Finally, the apply phase is where our defined changes take effect and our infrastructure is updated.

Other tools have adopted this lifecycle as well, using similar but slightly different naming conventions (synth, preview, etc., as mentioned above); they implement similar lifecycle management and follow the same logical phases.
This consistency is the real game-changer, creating a standardized workflow that enables predictability and enhances our ability to manage complex infrastructure with ease. It's like having a universal set of traffic lights for infrastructure deployments.
It's convention over configuration for the lifecycle: a standardized, predictable workflow, rather than reinventing the same steps every single time, allowing you to focus on the "what" and not on the "how".
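As a toy model of that convention (the four functions below are illustrative, not any tool's real CLI): each phase has exactly one job, and destroy is just apply with an empty configuration.

```python
def init(workspace):
    """Prepare the workspace; real tools download providers/plugins here."""
    workspace.setdefault("state", {})

def plan(workspace, config):
    """Preview: what would change, without touching anything."""
    state = workspace["state"]
    return ([("create", n) for n in config if n not in state] +
            [("update", n) for n in config if n in state and state[n] != config[n]] +
            [("delete", n) for n in state if n not in config])

def apply(workspace, config):
    """Execute exactly what plan predicted, then record the new state."""
    for op, name in plan(workspace, config):
        if op == "delete":
            del workspace["state"][name]
        else:
            workspace["state"][name] = config[name]

def destroy(workspace):
    """Tearing down is just converging on an empty desired state."""
    apply(workspace, {})
```

Because every tool exposes the same four phases, a CI pipeline can drive any of them with the same shape of job: init, plan (and gate on the output), then apply.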

When writing infrastructure code in a native language like Python or Go, you use cloud-provided libraries, which means working with the bare-bones APIs; if there is no state management in place, you risk losing state, which may lead to unstable environments.
This also opens the door to a "shift left" approach, where security checks, for example, can be integrated directly into our processes. For instance, if we create a security group and accidentally open a port to the world, a security scanning tool can detect this during the plan stage, before applying any changes, helping to catch the issue before it ever reaches production.
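A plan-stage check like that can be sketched as follows (the resource shape and rule fields here are simplified inventions, not any real tool's plan format):

```python
WORLD = "0.0.0.0/0"

def scan_planned_security_groups(planned):
    """Flag ingress rules that open a port to the whole internet."""
    findings = []
    for res in planned:
        if res.get("type") != "security_group":
            continue
        for rule in res.get("ingress", []):
            if rule.get("cidr") == WORLD:
                findings.append((res["name"], rule["port"]))
    return findings
```

Run between plan and apply, a check like this fails the pipeline before the risky rule ever reaches the cloud.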
These tools and frameworks brought this consistent lifecycle, which has allowed us to introduce automation with much more ease than we had before.
Previously, when we wrote our configuration in python or bash, we had to invent the entire lifecycle ourselves, and we had to reinvent it for each and every project based on where and when our code would run.
But with the tools mentioned above, we just need to follow the guidelines and build around the lifecycle these tools offer, not build the lifecycle itself; very similar to how we use build tools such as maven, gradle, npm, or yarn in our software projects to both run scripts and manage packages.
So how does all of the above help you decide?
Well, now that we have established this basis, when you're considering which approach to take, or whether there's an overlap between them, consider the following four questions:
- Are you ready for an IaC strategy❓
- Modularity, and how to manage configuration & code❓
- Where do we store the state❓
- How do you ensure standards, compliance, security, etc.❓
These are the questions to ask; answer them, and the method will become obvious.
I will attempt to answer them one by one, based on my experience, considering they need attention regardless of whether you choose the so-called imperative (programmatic) approach or the declarative one.
Are you ready for an IaC strategy❓ The short answer is yes ❗
- If you're at an early stage, choosing a path is necessary, but which one doesn't really matter, as long as you implement IaC in some form. And please don't even consider doing it manually a second time!

There are ready-made solutions for both approaches, from the programmatic ones mentioned above to solutions backed by the cloud providers; look up eks-blueprints (terraform / aws-cdk) as an example of their offerings.
As someone once taught me, we don't start with microservices; our first version is a monolith, and only after the MVP do we start breaking it down into services... which brings us to modularity.
Modularity, and how to manage IaC configuration & code!
You can have a shared infrastructure repository, or a shared infrastructure-as-code repository; even within the "Programmatic" and the "Declarative" approaches there are different styles and ways to do it. This is very similar to the microservices and polyglot approach: we let each team decide, which may mean chaos in some companies and great success in others. Add multiple cloud providers on top, and you get the bigger challenge of sharing resources across providers, another result of today's fast-growing demand for IaC. (In this domain, too, AI can accelerate the process, but let's not focus on that now.)
This IaC-per-project style may help you gauge the velocity of your different teams and approaches. I've been part of a large "DevOps" team that had Python code generating Terraform code, while a different project used Terraform directly and a third used Terragrunt (I kid you not), and this actually worked well within the scope and disciplines of the different teams.

In both approaches you can measure against the desired state and let each team decide how to go about it, trading flexibility against unity in process (init, plan, apply, destroy, and so on). The larger the plan, the harder it is to manipulate; there are downsides to keeping it monolithic. This is where Crossplane, for example, takes Terraform to another level of modularity, but it comes at a cost similar to what you would pay with Terraform or Pulumi, just replacing the templating system and the configuration language.
Complexities may arise when you need to store common libraries in code, which may provide constructs equivalent to Terraform's modules; you have to make sure you are not duplicating code.
💡 If it's your first IaC project, I would avoid separating configuration and modules, while I'd recommend the opposite for large teams.
- That "freedom of choice" will also help you gauge the velocity of your different teams and their approaches; on the other hand, the flexibility comes at a cost to the project, which is acceptable as long as the teams deliver on their KPIs.
I've seen this while consulting for companies that acquired other companies and had to integrate the different teams and approaches. Like every project: it's doable if it's done at the right time, and it will take time if it's not... 🤔
Where do we store the state or state files❓

Side note: to stay on track, I'm avoiding state-locking mechanisms, which also ensure reliability and concurrency if done right; these are provided by all of the tooling mentioned here.
It's a tough task to choose between self-managed, homegrown, open-source, proprietary, or as-a-service solutions! Remember, "no one got fired for choosing IBM" (Terraform included, at least until recently...), just saying ;)
If you're using a specific service in a specific cloud, a quick win may be to go with the cloud provider's object storage; this simplifies authentication and authorization issues between providers. If you don't have SSO for all your cloud providers, you should choose the simple option: for example, use an S3 bucket for storing the state file, or use HashiCorp's Consul on your shared infrastructure, which could be on-premise or in the cloud.
If you know you are going to utilize multiple clouds and wish to use the same tooling for all of them, note that all of these tools store their state in a state file, except AWS CDK, which keeps its state in CloudFormation metadata and is cloud-specific. Terraform, TfCDK, and Pulumi store their state behind a backend, which may be a local file, an AWS / Azure / Google object storage bucket, or a custom HTTP endpoint. There are also controller-driven approaches like tf-controller and Crossplane, which store their state in a Terraform state file, with Kubernetes secrets as another optional backend. The options are as modular as they get, depending on how well-defined your solution is.
💡 The current main options are "programmatic" and "declarative," but if you look at tools like cdktf and Pulumi, you can see a mix of both: not fully declarative, but not fully imperative either 🤔
How do you ensure standards❓

As mentioned, one of the main features these tools brought to the table is lifecycle management, which is the best way to ensure you can dispose of the services you no longer need as part of your IaC process.
This also adheres to the disposability principle of the 12-factor application. Imperative approaches, in most cases, only take care of creating resources, not disposing of them; you could write an Ansible playbook to dispose of the resources, but that's not the same as a declarative approach, where you dispose of resources the same way you created them.
Also note: most CI platforms have workflows for all the tools mentioned in this post; you may want to build your own based on them, with your own custom steps, but that's a different story.
Let’s not forget security aspects: Considering the different stages in the lifecycle are well defined, we can shift left and right to ensure that we are not introducing security risks.
As a common example, when defining a security group, we can define the rules in a way that we can ensure that we are not opening up any ports that we don’t need. This applies to any security standards or compliance you may need to adhere to.
If the core dilemma still remains, weigh the flexibility of the one approach against the predictability of the other.
Imperative Flexibility vs. Declarative Predictability
The fundamental decision between declarative and imperative IaC approaches isn’t just about technical implementation — it’s about aligning your infrastructure management strategy with your team’s strengths, project requirements, and organizational culture.
As we moved away from monolithic approaches, our processes and interfaces evolved along with our code, as part of the shift to microservices...

What is certain is this: in 2010 you might still ask yourself "should I be investing in IaC?"; nowadays Infrastructure as Code is a necessity. You either invest in it, or you don't scale...
Hence, the bottom line:
As we've well established, the lines between the two have blurred, and as the questions above imply, what you really need to answer is: does your team have the ability to deliver?
| Feature | “Imperative” Programmatic Approach | Declarative Approach (Configuration-Style) |
|---|---|---|
| Ideal For | Teams with strong software engineering backgrounds | Operations teams with infrastructure expertise |
| Key Strengths | Offers maximum flexibility and programmatic control. Enables complex, dynamic infrastructure provisioning. Leverages existing programming knowledge and tools. Better supports aspect-oriented and cross-cutting concerns. | Simplified, state-managed infrastructure. More predictable and easier to audit. Lower barrier to entry for infrastructure teams. Clear, straightforward representation of desired state. |
| Considerations | Higher learning curve for operations-focused teams. Can lead to more complex, harder-to-maintain code. Risk of over-engineering infrastructure definitions. | Less flexible for complex, dynamic requirements. Limited programmatic capabilities. May require more verbose configurations. |
It’s no surprise here… Conway’s law is a good reference point to consider when making this decision.

The most critical consideration in choosing an IaC approach is your team’s skill set and their comfort level.
As mentioned, I've accompanied tens of customers on their IaC journey. Some were just getting started and some were in "friction hell", very similar to the approach we took with microservices: each service chose its own tools for IaC, which was great to get started, but what happens when you have 1,500 multiplied by n services?!
When you look at IaC automation, the requirements of the customer vary from “it runs from the DevOps machine/laptop” to “full-fledged CI-CD pipelines”. The choices aren’t simple to make when you are mature. Some workflows fit small teams while complex workflows may be suitable for large teams/groups. For some teams the choice was obvious; for some, it was very challenging…
In some cases, the tools were already selected, and we needed to identify and automate away those friction points... the thoughts above are what I used to evaluate which approach to take.
I hope you found this post useful. If you have any questions or comments, please feel free to reach out. Yours sincerely, HP.