AI agents in practice: part 1
This article is the first part of AI Agents in Practice, a series covering everything you need to know about AI agents. In this part, we explain what AI agents are and how they technically work. In part 2 we answer the practical question: what should you use them for, and what shouldn't you?
Everyone’s talking about AI agents. Vendors call with promises about autonomous systems that can replace your entire department, and LinkedIn is full of posts about “AI employees”. But what are agents really, and why is everyone so excited about them?
An AI agent does things, not just returns text
The core is surprisingly straightforward: an AI agent is an AI application that autonomously performs actions. Not just returning text, but actually doing things: sending an email, retrieving data from your CRM, creating a ticket in Jira or modifying a file.
The difference with ChatGPT? With ChatGPT, you give a command each time and get text back. You’re actually the agent: you do what ChatGPT says. With an AI agent, that’s reversed: the AI does what you say.
That might sound subtle, but the difference is fundamental. A chatbot gives you a recipe; an agent cooks the meal.
Chatbot
Provides information and suggestions, but you execute the actions. 'You could send an email to John with this text...'
AI agent
Executes actions itself based on a goal. 'I've emailed John, created the ticket and updated the status in the CRM.'
Four characteristics distinguish agents from automation
What makes an AI agent an agent? The literature identifies four core characteristics:
Autonomy
An agent can make decisions and execute actions independently without a human approving every step. It receives a goal and figures out how to achieve it.
Goal-orientation
An agent works towards a specific goal. It plans steps, executes them and adjusts its approach if something doesn't work.
Perception
An agent can 'perceive' its environment through the tools it has. It reads data from systems, checks statuses and reacts to what it finds.
Adaptability
If an action fails or the situation changes, an agent adjusts its plan. It's not rigid but flexible in how it achieves its goal.
These characteristics distinguish agents from simple automation. A cronjob that emails a report every morning is not an agent. A system that determines which report is relevant, retrieves the data, and emails only when there are anomalies: that is an agent.
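The distinction can be made concrete in a few lines of code. Below is a minimal, hypothetical sketch: the function names, metric shape, and threshold are illustrative, not a real implementation. The cronjob acts unconditionally; the agent-style job perceives (fetches data), decides (checks for anomalies), and only then acts.

```python
def cronjob_report(send_email):
    # A cronjob: every morning, email the report. No decision involved.
    send_email("Daily report attached.")

def agent_style_report(fetch_metrics, send_email, threshold=0.2):
    # Agent-style: perceive the environment, decide whether anything
    # is worth reporting, and act only if so.
    metrics = fetch_metrics()
    baseline = metrics["baseline"]
    current = metrics["current"]
    deviation = abs(current - baseline) / baseline
    if deviation > threshold:
        send_email(f"Anomaly detected: {deviation:.0%} deviation from baseline.")
        return True  # acted
    return False  # nothing notable, stayed silent
```

In a real agent, the "decide" step would be an LLM call rather than a hard-coded threshold, but the perceive-decide-act loop is the same.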
AI agents are already widely deployed
AI agents aren’t some distant future. They’re already widely deployed:
Customer service
Companies like Klarna use AI agents to answer customer questions, route tickets and solve standard problems. Available 24/7, no waiting time.
Virtual assistants
Siri, Google Assistant and Alexa are actually AI agents. They understand your question, determine which action is needed (search, call, set reminder) and execute it.
Sales and marketing
Agents that qualify leads, send personalized follow-ups and update CRM data. Or monitor campaign performance and send alerts for anomalies.
Software development
Code assistants like Cursor and GitHub Copilot are agents that write code, fix bugs and suggest refactoring. The developer reviews and approves.
Data analysis
Agents that automatically generate reports, spot trends and detect anomalies in large datasets. From Google Analytics to financial data.
The pattern is always the same: tasks that previously required human attention but have a clear structure are taken over by agents.
Function calling is the mechanism, not magic
A few years ago, “agents” meant something completely different: multiple LLMs talking to each other, a kind of virtual team. Frameworks like AutoGen and CrewAI made this possible, but the focus has shifted. Modern models are smart enough to weigh multiple perspectives themselves, and for most use cases you don’t need a team of agents. One smart agent with the right tools is enough.
What’s now sold as agents is actually something much simpler: function calling. This is the mechanism by which an LLM can call external tools. The AI determines which tool it needs and fills in the parameters:
You ask a question
'What's the weather in Amsterdam?' or 'Send an email to the sales team about the new pricing.'
The model chooses a tool
The LLM sees which tools are available (weather API, email API, CRM API) and chooses the right one based on your question.
The model fills in parameters
For the weather API: city='Amsterdam'. For the email API: recipient='sales@company.com', subject='New pricing', text='...'
The tool is called
The API call is executed and data comes back, or the requested action is performed.
The model processes the result
The response is fed back to the model, which processes it into an answer for you.
That’s it. No magic, no consciousness, no real autonomy; just an LLM that’s smart enough to determine which API to call.
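The five steps above can be sketched in a few lines. This is a simplified illustration, not a real integration: the tool names and the exact shape of the model's output are hypothetical, though the JSON structure mirrors what most LLM APIs return for a tool call.

```python
import json

# Hypothetical tool implementations; a real agent would call external APIs.
def get_weather(city):
    return {"city": city, "forecast": "12°C, light rain"}

def send_email(recipient, subject, text):
    return {"status": "sent", "recipient": recipient}

TOOLS = {"get_weather": get_weather, "send_email": send_email}

def handle_tool_call(tool_call):
    # Step 4 of the loop: dispatch the tool the model chose, with the
    # parameters it filled in, and return the result.
    func = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return func(**args)

# Suppose the model responded to "What's the weather in Amsterdam?"
# with this tool call:
model_output = {"name": "get_weather", "arguments": '{"city": "Amsterdam"}'}
result = handle_tool_call(model_output)
# In step 5, `result` would be appended to the conversation so the
# model can phrase it as an answer for the user.
```

Everything "agentic" happens in the model's choice of `name` and `arguments`; the rest is ordinary dispatching code.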
MCP: an open standard for tool integration
You may have heard of MCP, the Model Context Protocol. This is an open standard, introduced by Anthropic in late 2024, for how AI models communicate with external tools and data sources. It standardizes the entire interaction: how the agent discovers which tools are available, how it calls them and how results come back.
MCP is like a universal adapter. Instead of building a separate integration for each tool, you use one protocol that more and more platforms support.
With MCP, you can relatively easily add new tools to your agent: Google Analytics, Slack, your own database. As long as there’s an MCP server for it, your agent can talk to it. Major players like Microsoft and various AI platforms have adopted MCP, though providers like OpenAI also maintain their own variants.
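To make the "universal adapter" idea concrete, here is a sketch of the three messages involved in discovering and calling a tool. The JSON-RPC framing and the `tools/list` and `tools/call` methods come from the MCP specification; the tool name and its parameters are hypothetical.

```python
# 1. The agent asks the MCP server which tools it offers:
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server describes its tools, including a JSON Schema for the
#    parameters, so the model knows how to fill them in:
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_analytics_report",  # hypothetical example tool
            "description": "Fetch a traffic report from Google Analytics",
            "inputSchema": {
                "type": "object",
                "properties": {"period": {"type": "string"}},
                "required": ["period"],
            },
        }]
    },
}

# 3. The agent invokes a tool by name, with the arguments filled in:
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_analytics_report",
        "arguments": {"period": "last_7_days"},
    },
}
```

Because every MCP server speaks this same shape, adding a new tool to your agent means pointing it at another server, not writing another bespoke integration.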
Conclusion: building is easy, controlling is the problem
The technology behind AI agents is surprisingly simple. Function calling and MCP make it possible for anyone to build a working agent in an afternoon with tools like n8n or Make.com. But the real challenge isn’t in the building.
The technology is simple
An AI agent is an LLM that calls tools via function calling. With MCP as a standard, it's becoming easier and easier to connect new tools.
Control is the problem
Building is easy, but how do you know if the agent does what you want? And how do you prevent it from doing things you didn't intend?
Now that you know what AI agents are and how they technically work, the logical follow-up question is: where do they work well, and where do things go wrong? In part 2, When to use them and when not, we cover concrete examples, including the infamous red-button problem and the context problem.
Need help with your agent strategy?
We help companies determine where AI agents make sense and where they don’t. Honest advice, no sales pitch. In a free 1.5-hour consultation, we discuss your situation and provide concrete recommendations.