AI agents in practice: part 1
This article is the first part of AI Agents in Practice, a series covering everything you need to know about AI agents. In this part, we explain what AI agents are and how they work technically. In part 2 we answer the practical question: what should you use them for, and what shouldn't you?
Everyone’s talking about AI agents. Vendors call with promises about autonomous systems that can replace your entire department. LinkedIn is full of posts about “AI employees”. But what are agents really? And why is everyone so excited about them?
The simple definition
The core is surprisingly straightforward: an AI agent is an AI application that autonomously performs actions. Not just returning text, but actually doing things. Sending an email. Retrieving data from your CRM. Creating a ticket in Jira. Modifying a file.
How is this different from ChatGPT? With ChatGPT, you give a command each time and get text back. In effect, you're the agent: you carry out what ChatGPT suggests. With an AI agent, that's reversed: the AI carries out what you ask.
That might sound subtle, but the difference is fundamental. A chatbot gives you a recipe. An agent cooks the meal.
Chatbot
Provides information and suggestions. You execute the actions. 'You could send an email to John with this text...'
AI Agent
Executes actions itself based on a goal. 'I've emailed John, created the ticket, and updated the status in the CRM.'
The four characteristics of AI agents
What makes an AI agent an agent? The literature identifies four core characteristics:
Autonomy
An agent can make decisions and execute actions independently without a human approving every step. It receives a goal and figures out how to achieve it.
Goal-orientation
An agent works towards a specific goal. It plans steps, executes them, and adjusts its approach if something doesn't work.
Perception
An agent can 'perceive' its environment through the tools it has. It reads data from systems, checks statuses, and reacts to what it finds.
Adaptability
If an action fails or the situation changes, an agent adjusts its plan. It's not rigid, but flexible in how it achieves its goal.
These characteristics distinguish agents from simple automation. A cronjob that emails a report every morning is not an agent. A system that determines which report is relevant, retrieves the data, and emails only when there are anomalies is an agent.
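The distinction can be made concrete in a few lines of code. This is a simplified sketch, not a real implementation: the function names and the metrics format are hypothetical, and real perception would read from actual systems.

```python
# A cronjob is fixed automation: the same action every time, no decisions.
def daily_cronjob(send_report):
    send_report("daily_report.pdf")  # always fires, relevant or not

# An agent-style loop: perceive the environment, decide, and act only
# when there is a reason to (hypothetical metric format for illustration).
def report_agent(fetch_metrics, send_alert):
    metrics = fetch_metrics()  # perception: read data from systems
    anomalies = [m for m in metrics if m["value"] > m["threshold"]]
    if anomalies:  # goal-orientation: act on the goal, not on a schedule
        send_alert(f"anomalies found: {len(anomalies)}")
        return True
    return False  # nothing relevant found: stay quiet

# Usage with stubbed-in perception and action:
alerts = []
report_agent(lambda: [{"value": 9, "threshold": 5}], alerts.append)
print(alerts)  # → ['anomalies found: 1']
```

The key design difference is that the cronjob's behavior is fully determined in advance, while the agent's behavior depends on what it observes.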
Where are AI agents already being used?
AI agents aren't some far-off future. They're already widely deployed:
Customer service
Companies like Klarna use AI agents to answer customer questions, route tickets, and solve standard problems. Available 24/7, no waiting time.
Virtual assistants
Siri, Google Assistant, and Alexa are actually AI agents. They understand your question, determine which action is needed (search, call, set reminder), and execute it.
Sales and marketing
Agents that qualify leads, send personalized follow-ups, and update CRM data. Or monitor campaign performance and send alerts for anomalies.
Software development
Code assistants like Cursor and GitHub Copilot are agents that write code, fix bugs, and suggest refactoring. The developer reviews and approves.
Data analysis
Agents that automatically generate reports, spot trends, and detect anomalies in large datasets. From Google Analytics to financial data.
The pattern is always the same: agents take over tasks that used to require human attention but follow a clear structure.
From hype to reality
A few years ago, “agents” meant something completely different. It was the idea of multiple LLMs talking to each other. A kind of virtual team. “Tom the marketer” talks to “John the developer” and together they come up with a solution. Frameworks like AutoGen and CrewAI made this possible.
That approach still exists, but the focus has shifted. Modern models are smart enough to weigh multiple perspectives themselves. For most use cases, you don’t need a team of agents. One smart agent with the right tools is enough.
What’s now sold as agents is actually something much simpler: function calling.
How function calling works
Function calling is the mechanism by which an LLM can call external tools. The AI determines which tool it needs and fills in the parameters.
You ask a question
'What's the weather in Amsterdam?' or 'Send an email to the sales team about the new pricing.'
The model chooses a tool
The LLM sees which tools are available (weather API, email API, CRM API) and chooses the right one.
The model fills in parameters
For the weather API: city='Amsterdam'. For the email API: recipient='sales@company.com', subject='New pricing', text='...'
The tool is called
The API call is executed. Data comes back. Or the action is performed.
The model processes the result
The response is fed back to the model, which processes it into an answer for you.
That’s it. No magic, no consciousness, no real autonomy. Just an LLM that’s smart enough to determine which API to call.
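The five steps above can be sketched without any AI SDK at all. Assume the model emits its tool choice as structured JSON (the tool names and argument shapes here are hypothetical, for illustration):

```python
import json

def get_weather(city: str) -> str:
    # Stubbed response; a real tool would call a weather API here.
    return f"18°C and cloudy in {city}"

def send_email(recipient: str, subject: str, body: str) -> str:
    # Stubbed response; a real tool would call an email API here.
    return f"Email sent to {recipient}: {subject}"

# The tools the model "sees" as available (step 2).
TOOLS = {"get_weather": get_weather, "send_email": send_email}

def run_tool_call(model_output: str) -> str:
    """Steps 2-4: look up the tool the model chose and call it
    with the parameters the model filled in."""
    call = json.loads(model_output)
    tool = TOOLS[call["name"]]
    return tool(**call["arguments"])

# Step 1: you ask "What's the weather in Amsterdam?"
# Steps 2-3: the LLM responds with a structured tool call like this:
model_output = '{"name": "get_weather", "arguments": {"city": "Amsterdam"}}'

result = run_tool_call(model_output)
print(result)  # → 18°C and cloudy in Amsterdam
# Step 5: in a real loop, this result is fed back into the model,
# which phrases the final answer for you.
```

The only part a real LLM contributes is producing that JSON: picking the tool name and filling in the arguments. Everything around it is ordinary application code.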
MCP: an open standard
You may have heard of MCP: Model Context Protocol. This is an open standard, introduced by Anthropic in late 2024, for how AI models communicate with external tools and data sources. It standardizes the entire interaction: how the agent discovers which tools are available, how it calls them, and how results come back.
MCP is like a universal adapter. Instead of building a separate integration for each tool, you use one protocol that more and more platforms support.
With MCP, you can relatively easily add new tools to your agent. Google Analytics, Slack, your own database, whatever. As long as there's an MCP server for it, your agent can talk to it. Major players like Microsoft and various AI platforms have adopted MCP, though some providers (such as OpenAI) also maintain their own variants.
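To give an impression of what this looks like in practice, here is a minimal sketch of wiring an MCP server into a client, in the JSON configuration style used by clients like Claude Desktop. The exact file location and keys vary per client, and the path is a placeholder:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/files"]
    }
  }
}
```

Once the client starts that server, the agent automatically discovers the tools it exposes (here, reading and writing files) without any custom integration code.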
Summary
Agents = AI that does things
Not just returning text, but actually performing actions. Sending emails, retrieving data, creating tickets.
The technology is function calling
An LLM that can call external tools. Simpler than marketing makes it seem.
Building is easy, testing is hard
With n8n or Make.com, you build an agent in an afternoon. But how do you know if it does what you want?
What’s next?
Now that you know what AI agents are and how they technically work, the logical follow-up question is: where do they work well, and where do things go wrong?
In part 2: when to use them and when not we cover concrete examples, including the infamous red button problem and the context problem.
Need help with your agent strategy?
We help companies determine where AI agents make sense and where they don’t. Honest advice, no sales pitch. In a free 1.5-hour consultation, we discuss your situation and provide concrete recommendations.