From Prompts to Agents — a DevOps Engineer navigating the AI Landscape


TL;DR: AI is no longer a futuristic concept; it’s fundamentally changing how we work today. For those of us specializing in DevOps, SRE, and platform engineering, this shift presents both incredible opportunities and new challenges. We’ve moved quickly from debating if AI will impact our work to figuring out how to effectively integrate it and adapt our skill sets to stay relevant and valuable.

I’ve personally navigated this rapidly evolving landscape, starting with simple prompt-based AI tools and graduating to more sophisticated “agentic” approaches. In this post, I’d like to share my perspective as a DevOps engineer on this journey, highlighting the differences between AI assistants and agents, and exploring practical applications you can implement today.

Phase 1: The Era of AI Assistants — from Search to “Code Snippet” Helpers

Like many, my first foray into AI for coding involved tools focused on assisting with individual tasks. Think of these as the “search prompt” era. Tools like GitHub Copilot, or earlier iterations of Cursor and Cline, primarily function as intelligent code completers and quick knowledge sources.

Start by asking the right question

These tools are fantastic for boosting productivity on isolated coding tasks — generating boilerplate, remembering API calls, or helping with repetitive code patterns. They excel at writing functions, suggesting command flags, or explaining a piece of code you’re looking at right now.

However, their effectiveness often wanes when faced with tasks that require a broader understanding of the entire project, the surrounding infrastructure, or a complex sequence of operations. They are reactive, waiting for your prompt, and typically don’t maintain a persistent context or understand high-level objectives without constant re-prompting. For complex DevOps workflows spanning multiple files, commands, and systems, this “code snippet” approach, while helpful, felt limiting.

Phase 2: Towards the Agentic Approach — Understanding Context and Taking Initiative

This is where the concept of the “AI Agent” comes into play, and I believe it represents a significant step forward, particularly for DevOps professionals.

Unlike assistants that wait for precise prompts for isolated tasks, agents are designed to:

Understand Project Context

Analyze your codebase, configuration files, command history, and potentially even documentation to build a model of your project and environment.

Providing Gen-AI | the relevant context

Accept Broader Instructions

You can give them higher-level goals such as “Set up the local development environment” or “Diagnose the failing deployment,” rather than just asking for specific code blocks or commands.

Cursor rules | adding specific high-level instructions

Execute Multi-Step Tasks

They can determine the necessary steps to achieve the goal, execute commands sequentially, analyze the output, and adapt their plan based on results.

Cline agent mode | breaking it down into tasks

Maintain Memory/Guidelines

They can remember previous interactions, project-specific conventions, and custom instructions you provide.
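The capabilities above can be sketched as a minimal plan-execute-observe loop. This is a toy illustration, not any specific tool’s implementation: the hard-coded planner, the step names, and the memory model are all invented for clarity, and a real agent would delegate planning to an LLM and shell out for execution.

```python
# A toy agent loop: plan steps from a goal, execute them in order,
# and keep a persistent memory of what was done. The hard-coded
# planner below stands in for an LLM call.

def plan(goal):
    """Derive an ordered list of steps from a high-level goal."""
    if goal == "set up dev environment":
        return ["install dependencies", "start database", "run migrations"]
    return []

def execute(step, memory):
    """Run one step; a real agent would shell out and inspect output."""
    memory.append(step)  # remember what was done for later prompts
    return f"{step}: ok"

def run_agent(goal):
    memory = []
    results = [execute(step, memory) for step in plan(goal)]
    return memory, results

memory, results = run_agent("set up dev environment")
print(results)
```

The key difference from an assistant is that the loop, not the human, sequences the steps and carries the memory between them.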

AI Agent in Action: From Theory to Practice

Let me illustrate this shift from reactive assistance to proactive agency with a real-world scenario that many DevOps engineers face daily.

A Common Challenge: Setting Up a Development Environment

Imagine you’re onboarding a new team member, or you need to replicate a production issue locally. The traditional approach might involve:

  • Hunting through documentation for setup steps
  • Running multiple commands in sequence
  • Troubleshooting environment-specific issues
  • Ensuring all configurations match your team’s standards

With an AI assistant, you’d ask for individual commands: “How do I start the database?” or “What’s the command to install dependencies?” Each question requires a separate prompt, and the AI has no memory of your project’s specific requirements.

Enter AI Agents: Context-Aware Automation

This is where tools like Claude Code and similar agentic platforms fundamentally change the game. Instead of asking for individual commands, I can give a high-level instruction: “Set up the local development environment for this project.”

The agent then:

  1. Analyzes the project structure — examining package.json, docker-compose.yml, README files, etc.
  2. Understands the context — recognizing this is a Node.js project with a PostgreSQL database
  3. Executes multiple steps — installing dependencies, starting services, running migrations
  4. Adapts to issues — if a port is already in use, it suggests alternatives
  5. Validates the setup — checking that services are running and accessible

In the past we had to provide a Makefile (or a Taskfile) which did all the above — nowadays an agent can build them on the fly based on context!
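As a rough sketch of that flow (not Claude Code’s actual logic), the detection and adaptation steps might look like the snippet below, with all real commands mocked so only the decision logic is visible. The file names and the port-fallback strategy are assumptions for illustration.

```python
# Mock of two of the steps above: detecting the stack from files
# present, and adapting when the preferred port is already in use.

def detect_stack(files):
    """Infer the project type from well-known files (illustrative list)."""
    stack = []
    if "package.json" in files:
        stack.append("node")
    if "docker-compose.yml" in files:
        stack.append("docker")
    return stack

def pick_port(preferred, in_use):
    """Fall back to the next free port when the preferred one is taken."""
    port = preferred
    while port in in_use:
        port += 1
    return port

files = ["package.json", "docker-compose.yml", "README.md"]
print(detect_stack(files), pick_port(3000, in_use={3000}))
```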

The Terminal-First Advantage

Similar to AI-driven shells such as Warp, what immediately resonated with me about Claude Code was its terminal-first approach. While application developers might prefer IDEs, we DevOps engineers live in the terminal. Having an agent that operates natively in this environment feels natural and eliminates context switching.

When I initialize Claude Code in a project with /init, it doesn’t just start with a blank slate. It:

  • Scans the project structure
  • Identifies common patterns and tools
  • Reads documentation and configuration files
  • Creates a foundational understanding in a claude.md file

The Power of Persistent Context

Claude Code | providing AI instruction persistence

Here’s where the real magic happens. The claude.md file becomes the agent’s “memory”—its understanding of your project. But more importantly, you can edit this file to encode your team’s specific practices:

    # Project Guidelines for AI Agent
    
    ## Environment Setup
    - Always source `.env.local` before running commands
    - Use `npm run dev:secure` instead of `npm run dev` for HTTPS
    - Database migrations must run before starting the server

    ## Safety Protocols
    - Never run destructive commands without confirmation
    - Always backup database before schema changes
    - Use staging environment for testing deployment scripts

Now when I ask the agent to “prepare the environment for testing,” it doesn’t just start the application — it follows our specific protocols, sources the right environment file, and includes our safety checks.
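One way to picture how persisted guidelines steer behavior: the sketch below parses an “instead of” rule from a guidelines snippet and rewrites a command accordingly. In practice, agents feed claude.md to the model as context rather than parsing it programmatically; the parsing here is purely illustrative.

```python
# Illustrative only: turn a guideline of the form
# "Use `X` instead of `Y` ..." into a command substitution.

GUIDELINES = """
- Use `npm run dev:secure` instead of `npm run dev` for HTTPS
"""

def apply_guidelines(command, guidelines):
    """Rewrite command when a guideline names a preferred alternative."""
    for line in guidelines.splitlines():
        if "instead of" in line:
            preferred, original = line.split("instead of")
            preferred = preferred.split("`")[1]  # text in first backticks
            original = original.split("`")[1]
            if command == original:
                return preferred
    return command

print(apply_guidelines("npm run dev", GUIDELINES))  # npm run dev:secure
```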

A Practical Example

Let me show you this in action.

Traditional Approach:

    # Me: "How do I start the development server?"
    # AI: "Run npm start"
    # Me: "It failed, the database isn't running"
    # AI: "Start PostgreSQL with brew services start postgresql"
    # Me: "Now I need to run migrations"
    # AI: "Use npx prisma migrate dev"
    # ... and so on

Agentic Approach:

    # Me: "Set up the development environment"
    # Agent analyzes project, checks claude.md guidelines
    # - Starts PostgreSQL service
    # - Sources .env.local file
    # - Installs any missing dependencies
    # - Runs database migrations
    # - Starts the development server with HTTPS
    # - Validates all services are running
    # - Reports: "Development environment ready at https://localhost:3000"

The difference is profound: instead of managing individual steps, I’m delegating entire workflows while maintaining control over how they’re executed.

Getting Started: Your First Steps with AI Agents

Ready to make the transition? Here’s how to begin:

  1. Choose Your Tool: Start with Claude Code, Cursor, Roo Code, or Cline to experiment with agentic approaches. They all come with built-in tools for accessing the local file system, reading and writing files, executing commands, and more.
  2. Initialize Your Project: With tools like Claude Code, use the /init command to let the agent understand your project structure, create a README.md if one is missing, and instruct the tool to learn what your project is all about.
  3. Define Your Guidelines: Create or edit the claude.md or spec.md file with your team’s specific practices
  4. Start Small: Begin with simple tasks like “set up development environment” or “run tests”
  5. Iterate and Refine: Adjust your guidelines based on what works and what doesn’t

What’s Next?

This shift from reactive assistance to proactive agency becomes even more powerful when we integrate it with your broader development ecosystem. In my next article, I’ll explore how Model Context Protocols (MCPs) give agents access to your real development environment — your Git repositories, Kubernetes clusters, and configuration files — transforming them from helpful assistants into genuine operational partners.

The transformation is already underway. The question isn’t whether this will change how we work — it’s whether we’ll lead this evolution or be led by it.

If you found this post informative, I’m curious to hear:

  • What’s your experience with AI agents in your DevOps practice?

  • Have you started experimenting with agentic approaches? Share your thoughts in the comments below.

Next up: Building Production-Ready AI Agent Workflows: MCP Integration and Operational Excellence

