The myth of self-service BI and AI

I vividly remember my client’s expression changing, a mix of surprise and dismay, as I walked him through the dashboards I had built. He was a product manager, and these dashboards had been a game changer for his team. Before my intervention, they spent almost a week each month consolidating data and preparing reports for senior management. Afterward, this task became almost effortless.
On my last day, the product manager, let us call him Roy, asked me to transfer knowledge so he could make small tweaks to those dashboards on his own. I hesitated, knowing that the models and calculations were complex and that Roy had never used the software before. Yet he insisted. I opened the software editor and methodically explained the data model, calculations, filters, and navigation features.

I have worked in consulting for over a decade, and this was the most disappointed I have ever seen a client, not due to the quality of my work, but because Roy genuinely believed that he and his team would be fully autonomous once I left. In that moment, he realized they would not be.
N.B. I use “BI” and “analytics” interchangeably in this article to mean the practice, or the team responsible for, preparing both strategic and operational visual analytics assets.
The Self‑Service BI Dream
In the early 2010s, self-service BI emerged as the holy grail for business and IT teams, a sharp departure from the status quo. At the time, business users had to go through a rigid process of requirement gathering, often involving business analysts and project managers, before developers could build anything within complex, monolithic BI frameworks. After weeks (or months), the users would receive a report that might not even meet their needs — due to vague requirements or misunderstandings during development. In essence, even to get basic insights and statistics about their activity, organizations needed multiple roles and had to endure a slow, heavyweight development cycle.
The self-service BI paradigm emerged as a breath of fresh air, promising to extricate organizations from this slow, expensive cycle of traditional report creation. Ask a question, get an answer: no reliance on IT or analytics specialists, no more long waiting times.

For simple scenarios, like analyzing a tidy Excel file or a handful of well-structured tables, this self-service analytics vision can work. Push it further, however, and organizations find themselves back in a centralized “report factory,” staffed by specialists translating business needs into technical solutions.
This reflects many of my clients’ experiences: a continued lack of autonomy with data, despite significant investments in technology. I have been trying to identify why self-service analytics remains elusive, and I believe it is, in large part, due to three flawed but common assumptions, which I share below:
1. Assuming Data Engineering Is Trivial
I will define data engineering as the practice of transforming raw data into a form that can be used to respond to analytical queries. I consider it a fundamental pillar of self-service analytics.

Yet many organizations implement sleek front-end tools while neglecting the data engineering work those tools require. It is like installing a fancy chandelier without the basic electrical wiring in place.
Consider a canonical sales dataset with customers, order numbers, and order dates. To classify customers into one of the following categories, one must perform both a join and an aggregation; a minimal sketch follows the list.
- Active: at least one order in the past three months
- At risk: last order between three and six months ago
- Churned: no orders for more than six months
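Here is what that join and aggregation might look like in pandas. The table and column names (customers, orders, customer_id, order_date) are invented for illustration; the shape of the work, however, is typical:

```python
import pandas as pd

# Hypothetical tables; names are illustrative only.
customers = pd.DataFrame({"customer_id": [1, 2, 3]})
orders = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "order_date": pd.to_datetime(["2024-05-20", "2024-03-01", "2023-10-15"]),
})

today = pd.Timestamp("2024-06-01")

# Aggregation: most recent order per customer.
last_order = (
    orders.groupby("customer_id")["order_date"]
    .max()
    .reset_index(name="last_order")
)

# Join: customers with no orders at all must still be classified.
df = customers.merge(last_order, on="customer_id", how="left")

def classify(last: pd.Timestamp) -> str:
    if pd.isna(last):
        return "Churned"  # never ordered; treated as churned here
    months = (today - last).days / 30.44  # rough average month length
    if months <= 3:
        return "Active"
    if months <= 6:
        return "At risk"
    return "Churned"

df["status"] = df["last_order"].apply(classify)
print(df[["customer_id", "status"]])
```

Even this toy version forces decisions (how to count a month, what to do with customers who never ordered) that no drag-and-drop interface can make on the user’s behalf.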
Now, business users, sold on “drag-and-drop” simplicity, expect to select a metric and see the result instantly, without having to do the underlying data engineering work.
This drag-and-drop fantasy is further echoed in upskilling programs whose training sessions rely on perfectly clean, simplified demo data. But once users return to their real-world environments, they find messy, complex datasets. Lacking expert support grounded in these actual use cases, they quickly find themselves overwhelmed and unable to apply what they have learned.

The lack of data‑engineering know-how, combined with limited time allocated for upskilling, prevents business users from moving beyond basic metrics in a genuine self‑service BI model. As a result, organizations slide back into the old report‑factory paradigm, relying on dedicated developers to translate requirements into dashboards and analytical assets.
Admittedly, a few “data champions” with prior experience or strong interest can navigate these tools and earn star status, but their success does not reflect the reality of most business users.
2. Assuming Users Will Naturally Develop Analytical Skills
The second flawed assumption is that users will naturally develop analytical skills, which leads organizations to underinvest in training that goes beyond technical skills to include data literacy and analytical thinking.
Self‑service BI promises that any knowledge worker can retrieve a metric without outside help. In reality, technical training should be combined with data thinking, which involves understanding that numbers require context.
Consider a plot of the average number of hours per day logged by the members of a team over a six-week period. Even with a prebuilt dashboard (so no self-service BI at play), interpretations vary widely depending on the viewer:

- An HR professional spotting a decline in average daily hours worked by a team might worry about engagement issues.
- A finance analyst viewing the same trend could conclude the team is overstaffed.
- The team leader, however, might see it as a natural consequence of a recent strategic project’s heavy workload.

The ideal scenario would be that each user treats data not as the final answer, but as the starting point for further inquiry. Questions such as the following should be the norm:
- “Is this trend expected?”
- “Is six weeks too short? Should I look at a longer timeframe?”
- “What is the broader context?”
- “How does this compare to the same period last year?”
- “Could this trend be linked to other company events?”
Upskilling in data thinking means making these questions a reflex. It is not just about accessing data; it is about empowering users with the confidence and curiosity to explore and interpret it.
Embedding explanations and contextual annotations directly into datasets should be considered an essential feature of any true self‑service BI solution. Without fostering critical inquiry and providing context around numbers, self‑service BI will never live up to its promise.
Unfortunately, many organizations neglect this crucial step of training or coaching their business users, which leaves them even more dependent on a few in-house “experts” to make use of the numbers and trends they consult in their dashboards.
3. Assuming Governance Can Wait
The third and most overlooked issue is governance. Early BI rollouts often focus on a small group of power users with elevated permissions. As adoption expands, the complexity of data governance becomes evident. Say our governance model limits most users to summary-level data, which is fine until one of them needs to build a report with transaction-level detail. Suddenly we realize that one of the principles of our governance policy breaks the self-service BI paradigm.

While I acknowledge that finalizing a comprehensive data governance strategy before rollout is challenging, it is entirely possible to draft a robust first version by examining the constraints and capabilities of the tools, people, and processes. It is like designing a house’s structure carefully so that later adjustments require only furniture rearrangement, not rebuilding walls.
I strongly recommend bringing in external perspectives early on and remaining ready to refine and iterate.
Enough of BI, Now We Have AI
Many believe AI will make these BI challenges obsolete. I think, however, that AI is no wonder drug either, and it might even amplify our mistakes and short-term thinking. Let us revisit our three flawed assumptions in the context of AI-powered enterprise analytics.
1. Assuming Data Engineering Is Obsolete
AI tools often rely on a semantic model, essentially utilizing the same data engineering pipelines discussed above. Expose raw sales data to an AI model and ask for “active customers,” and you will at best receive no answer and at worst encounter hallucinations. You must tell the AI system what “active” means in your context.
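How you “tell” the system varies by product, but conceptually it is a semantic layer: business definitions written down once and mapped onto the physical data. Below is a hypothetical sketch; every name in it (the metric, tables, and columns) is invented for illustration and does not reflect any vendor’s actual API:

```python
# Hypothetical semantic-layer entry: the business definition of
# "active customer" is written down once, so any consumer (a BI tool
# or an AI model) resolves the term to the same logic instead of guessing.
ACTIVE_CUSTOMERS = {
    "name": "active_customers",
    "description": "Customers with at least one order in the past three months.",
    "sql": """
        SELECT COUNT(DISTINCT o.customer_id)
        FROM orders AS o
        WHERE o.order_date >= CURRENT_DATE - INTERVAL '3 months'
    """,
    "synonyms": ["active clients", "recently active customers"],
}
```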
Assuming a new tool or AI feature will eliminate the need for data engineering repeats the same flawed premise as with BI.
2. Assuming Users Don’t Need Upskilling
In a self-service AI paradigm, getting answers to business questions is bound to become faster but also more isolating. A pre-prepared dashboard or report is likely to carry descriptions or annotations that provide context about a number or a trend. Better still, a pre-prepared dashboard has an author, a person who can be contacted to learn more about the context of the numbers shown. In the absence of such supporting information and human contact, the business user’s ability to interpret information in context, and to seek that context out when it is missing, becomes crucial. Without this ability, we will fall right back into a report-factory-like paradigm, even with AI-infused tools at our disposal.
3. Assuming Governance Can Be an Afterthought
In a traditional analytics scenario, it is largely straightforward to secure access to databases and credentials. This may frustrate users at times, but the principle of least privilege is reasonably easy to implement.
In an AI-powered analytics solution, it becomes harder to craft rules that allow the AI to answer business questions while preserving strict data confidentiality where it matters. Continuing our earlier example of aggregate vs. raw data, think about what will prevent a user from getting transaction-level information if they use a prompt that is specific enough; the toy sketch below illustrates the issue.
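In this sketch the table, values, and guard are all invented. The point is that an “aggregates only” rule collapses once a filter is specific enough to isolate a single row:

```python
import pandas as pd

# Toy transaction table; all values are invented for illustration.
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "city": ["Lyon", "Lyon", "Paris", "Lyon"],
    "amount": [120.0, 80.0, 310.0, 55.0],
})

# "Aggregates only" sounds safe, but a sufficiently specific filter
# yields a sum over a single row, i.e. the raw transaction itself:
leak = transactions.query("customer_id == 2")["amount"].sum()
print(leak)  # 310.0 -- exactly one transaction's amount

# One possible mitigation: refuse aggregates over groups that are too small.
MIN_GROUP_SIZE = 2

def safe_total(filters: str) -> float:
    subset = transactions.query(filters)
    if len(subset) < MIN_GROUP_SIZE:
        raise PermissionError("Group too small; would expose row-level data.")
    return float(subset["amount"].sum())

print(safe_total("city == 'Lyon'"))  # fine: aggregates three rows
# safe_total("customer_id == 2")     # raises PermissionError
```

Note that even the mitigation leaks something: the refusal itself reveals that the group is small. This is exactly why writing such rules is hard.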
Creating AI governance will, in large part, include data governance, and it will remain a challenging, non-trivial, and iterative process.
So What Now?
My claims so far might suggest otherwise, but I believe that true self-service BI and AI-augmented analytics are attainable. Getting there, however, requires shifting from tools-first to foundations-first thinking. We must stop treating tools as silver bullets: they deliver value only when paired with the right data, skills, and governance.
My hopes and suggestions for organizations updating or replacing their analytics stack with AI are:
- Double down on robust data engineering pipelines to serve curated, ready‑to‑use datasets.
- Invest in targeted upskilling so users not only know how to operate tools but also think like analysts.
- Devise thoughtful governance to balance autonomy and control.
As for Roy, he receives ongoing support from his internal IT team and continues to rely on analytics specialists. I will always remember that moment as a reminder that no matter how slick the tool, it is the people and the processes that determine whether self-service analytics truly succeeds.
Author:
Fatima Soomro