Demystifying Agentic AI

June 17, 2025

5 min read

Lately, Agentic AI has been generating a lot of buzz on LinkedIn, and use cases seem to pop up everywhere. Indeed, it is a powerful technology that is reshaping how we design and deploy artificial intelligence systems. In this article, I demystify what it is about and dive into 5 of the main Agentic AI workflows.

Let’s start by defining Agentic AI.

What Is Agentic AI?

Agentic AI extends beyond the chat-based interactions of large language models (LLMs) to create autonomous agents that can make decisions, use tools, and adapt to complex tasks.

Exactly what elements make an AI system “agentic” is still open for debate. We can, however, observe that it often includes or combines the following:

  • Agency and workflow control: Unlike a static chatbot that only responds to prompts, an agent has a degree of autonomy. It decides how to approach a problem and how to break it down into tasks. One particular kind of agency is the ability for LLMs to control part of the workflow, namely deciding what tasks to execute next, or how many times.
  • Multiple LLMs: Another element that makes an AI system “agentic” is the presence of many agents. In other words, not a single LLM but either multiple calls to it or, more likely, multiple calls to different LLMs optimized for certain tasks.
  • Tool usage: Finally, an element frequently mentioned when speaking about agents is tools: the ability for LLMs to run code and therefore take action in the real world, for example booking meetings or sending emails.

Let’s unpack 5 Agentic AI workflows that are commonly used and see how they work in real-world scenarios.

For context, in the diagrams below, blue indicates coded steps and red indicates LLM calls.

1. Prompt Chaining

What it is:

Prompt chaining involves multiple sequential LLM calls, where the output of one becomes the input for the next. This allows an agent to refine its understanding, classify the problem, and then produce a better final response. Optionally, there can be a gate where code is run on the previous response to “calculate” the next question.
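As a minimal sketch of this pattern (assuming a hypothetical `call_llm` helper that wraps whatever LLM API you use), a prompt chain with a coded gate might look like this:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a call to your LLM provider's API."""
    raise NotImplementedError

def answer_with_chain(question: str) -> str:
    # First LLM call: analyse / classify the question.
    analysis = call_llm(f"In one sentence, state what this question is really asking:\n{question}")

    # Coded gate: plain code inspects the intermediate output and
    # decides how to phrase the next prompt.
    if "ambiguous" in analysis.lower():
        next_prompt = f"Ask one clarifying question about:\n{question}"
    else:
        next_prompt = f"Using this analysis:\n{analysis}\n\nAnswer the question:\n{question}"

    # Second LLM call: produce the final response.
    return call_llm(next_prompt)
```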

Example:

PDF CVs are a thing of the past: now you can have a chatbot answering questions and engaging people about your career, and I built one!

While building it, I faced a challenge with retrieval-augmented generation (RAG) retrieving information about my CV from a knowledge base. Initially, the RAG system pulled use-cases from my company website and mixed them into my personal CV, creating confusion.

To fix this, I implemented prompt chaining:

  1. The first LLM call classifies the question and filters the RAG database to include or exclude company-specific content, according to what was asked.
  2. The second LLM call uses this filtered context to generate a more accurate answer.

This two-step process turned a static Q&A system into a dynamic agent that could reason about the best way to handle each question. And it is definitely “agentic”, as it decides autonomously what context will be retrieved.
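A minimal sketch of that chain, reusing the `call_llm` placeholder from the previous snippet and a hypothetical `search_knowledge_base` function standing in for the actual RAG retrieval:

```python
def search_knowledge_base(query: str, include_company_docs: bool) -> str:
    """Hypothetical placeholder: replace with your vector-store / RAG retrieval."""
    raise NotImplementedError

def cv_chat(question: str) -> str:
    # First LLM call: classify the question to decide which documents are relevant.
    label = call_llm(
        "Answer only 'personal' or 'company'. Is this question about my personal CV "
        f"or about company use-cases?\n{question}"
    ).strip().lower()

    # Coded step: filter the RAG database according to the classification.
    context = search_knowledge_base(question, include_company_docs=(label == "company"))

    # Second LLM call: answer using only the filtered context.
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}")
```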

You can see the result at this link: scroll slightly down and wait for the chatbot to load (N.B.: the app is not always on, as it would cost too much. If by chance it doesn’t load, just refresh the page).

2. Routing

What it is:

Routing in Agentic AI means programmatically directing a question or task to the right tool or model, depending on the context. Specialized LLMs can be built that are optimized for specific tasks or that respond in a specific format. The question can first be classified and then routed to the most relevant LLM.

Example:

In a project for a client’s knowledge management platform, we had to decide whether a question required a general explanation, a data retrieval action, or an external API call.

We built a routing step that mixed:

  • A classification of the question, not unlike the previous example about prompt chaining. In fact, routing can be seen as prompt chaining with several LLM options.
  • Information about where the user came from, as users could reach the chatbot through several paths.

Based on this choice, the question was sent to a specific LLM. For example:

  • If a question about “project timelines” was detected, it routed the request to an agent equipped with a tool to connect to the projects database.
  • If the question was about “best practices”, it used an LLM equipped with context from the company Confluence.

This method enabled the same chatbot to efficiently handle a broader variety of tasks.
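A simplified sketch of such a routing step, again with the `call_llm` placeholder, plus hypothetical `query_projects_database` and `fetch_confluence_pages` stubs standing in for the actual tools:

```python
def query_projects_database(query: str) -> str: ...   # hypothetical tool stub
def fetch_confluence_pages(query: str) -> str: ...    # hypothetical tool stub

def route_question(question: str, entry_point: str) -> str:
    # Classification step: a small LLM call labels the question.
    label = call_llm(
        "Classify this question as 'timelines', 'best_practices' or 'other'. "
        f"Answer with the label only.\n{question}"
    ).strip().lower()

    # The path the user came from can refine or override the route.
    if entry_point == "projects_portal":
        label = "timelines"

    # Dispatch to the LLM specialised (or equipped) for that route.
    if label == "timelines":
        return call_llm(f"Context:\n{query_projects_database(question)}\n\nQuestion: {question}")
    if label == "best_practices":
        return call_llm(f"Context:\n{fetch_confluence_pages(question)}\n\nQuestion: {question}")
    return call_llm(question)  # generic fallback
```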

3. Parallelization

What it is:

Parallelization involves several LLMs without necessarily giving them much decision power: it means running multiple LLM processes at once for the same task and then combining the results. This can increase robustness and reduce bias.

Example:

In a project assessing the maturity level of different answers to a questionnaire, we calculated the probabilities that a specific text matched a maturity level. Instead of relying on one LLM output, we ran three LLMs in parallel and averaged the results. This approach reduced the risk of a single outlier response skewing the outcome and produced a more robust result.
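A minimal sketch of that setup, parallelising three calls to the `call_llm` placeholder with Python’s standard library and averaging the numeric scores:

```python
from concurrent.futures import ThreadPoolExecutor

def score_maturity(answer_text: str, level: int, n_runs: int = 3) -> float:
    prompt = (
        f"On a scale from 0 to 1, how well does this answer match maturity level {level}? "
        f"Reply with a single number.\n\nAnswer: {answer_text}"
    )
    # Run the same scoring prompt several times (or against several models) in parallel.
    with ThreadPoolExecutor(max_workers=n_runs) as pool:
        scores = list(pool.map(lambda _: float(call_llm(prompt)), range(n_runs)))
    # Average the results so a single outlier cannot skew the outcome.
    return sum(scores) / len(scores)
```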

4. Orchestrator

What it is:

Orchestration is probably the most “agentic” workflow in this list, as it combines all the elements that define it. Orchestration is like routing, where the router is itself an LLM. The orchestrator acts as the “brain” of the system, planning and assigning tasks to other LLMs directly, without intermediate coded steps. This means that it creates and coordinates the various steps, deciding when and whether to chain or route.

Example:

When downloading brochures from our website, the interested person is required to write a short message and some information about themselves. We designed an orchestrator that first detects whether the request is spam or legitimate. If the download request is legitimate, the system calls a second LLM to estimate its urgency and classify it into one of our business lines: Data Strategy, Data Literacy or Enterprise Analytics. Based on this decision, further LLMs or coded processes can be called to:

  • Simply send the brochure
  • Send the brochure with information specific to the business line of interest
  • Additionally, if the message shows urgency, send a message on a Teams channel so that we are promptly notified and can act quickly

This sort of orchestration can enable LLMs to take on much more complex tasks.
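As a rough sketch of that flow (the `send_brochure` and `notify_teams_channel` helpers are hypothetical stand-ins for the actual integrations, and `call_llm` is the same placeholder as before):

```python
def send_brochure(contact: dict, business_line: str | None = None) -> None: ...  # hypothetical stub
def notify_teams_channel(message: str) -> None: ...                              # hypothetical stub

def handle_brochure_request(message: str, contact: dict) -> None:
    # Orchestrator step 1: is the request spam or legitimate?
    verdict = call_llm(f"Is this message spam or legitimate? Answer with one word.\n{message}")
    if "spam" in verdict.lower():
        return  # drop silently

    # Orchestrator step 2: estimate urgency and classify the business line.
    decision = call_llm(
        "Classify this request. Reply exactly as '<business line>;<urgent|normal>', "
        "where the business line is one of: Data Strategy, Data Literacy, Enterprise Analytics.\n"
        f"{message}"
    )
    business_line, urgency = [part.strip() for part in decision.split(";")]

    # Downstream actions chosen according to the orchestrator's decisions.
    send_brochure(contact, business_line=business_line)
    if urgency.lower() == "urgent":
        notify_teams_channel(f"Urgent brochure request ({business_line}): {message}")
```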

5. Validator

What it is:

Validation is a crucial step in Agentic AI workflows that can actually be embedded in any other workflow. It involves using a second LLM to check the quality and correctness of an answer before final delivery, against criteria defined in the prompt. For example, we can add context that should be respected and ask the second LLM whether the first LLM’s answer matches it. The check can also be much more generic, as in “Check that the answer you received is written in a professional tone”.

Example:

We created a chatbot for a university to answer questions about lesson transcripts. To ensure the chatbot’s answers were accurate and robust, we used a two-step validation:

  • The first LLM generated an answer.
  • The second LLM acted as a validator, reviewing the answer’s correctness and completeness.

If the validator flagged issues, the first LLM was asked to re-answer the question, now with the validator’s feedback in the prompt.
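A minimal sketch of that generate-validate loop, again assuming the hypothetical `call_llm` helper:

```python
def answer_with_validation(question: str, max_retries: int = 2) -> str:
    feedback = ""
    for _ in range(max_retries + 1):
        # First LLM: generate (or regenerate) an answer, including any reviewer feedback.
        prompt = question if not feedback else f"{question}\n\nAddress this reviewer feedback:\n{feedback}"
        answer = call_llm(prompt)

        # Second LLM: validate against criteria defined in the prompt.
        verdict = call_llm(
            "You are a reviewer. Check this answer for correctness and completeness. "
            "Reply 'OK' if it is acceptable, otherwise list the issues.\n\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        if verdict.strip().upper().startswith("OK"):
            return answer
        feedback = verdict  # feed the issues back into the next attempt

    return answer  # best effort after the allowed retries
```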

This loop significantly improved the reliability of the answers.

Conclusion

Agentic AI is a powerful design pattern for building intelligent systems that go beyond static responses. By understanding and applying these concepts, you can build agents that are more helpful, trustworthy, and aligned with user needs.

If there is one message I’d like you to take away from this article, it is that humans still have a job. While single calls to an LLM API can already be powerful, there is still much that can be done to orchestrate them. This will enable us to produce better results and tackle more complex problems. In a way, from doers, we have become designers.

But how can we then control these systems and ensure their behavior matches our expectations? Stay tuned for the next article.

Authors:

Luca Pescatore
