Pim launched an AI dashboard.

We show how AI shifts power and responsibility.
We ask who is truly at the wheel.
We argue that AI should play a supporting role and that people must remain the owners.

AI silently shifts the center of gravity of power, trust, and responsibility.

This blog explores what truly changes when we let dashboards decide—and whether we, as humans, are still at the wheel.

It seemed harmless, until we realized that no one knew who was responsible when the system got it wrong.

It is a scene I see in more and more places. AI is introduced as convenient support: a bit of time saved here, slightly better predictions there. But beneath the surface something else shifts, not in the technology, but in power, trust, and decision-making. The question is less "what can this system do?" and much more "what does this system do to how we as humans relate to one another?"

AI is not a smart intern

We like to talk about AI as if it were a kind of digital intern: useful for texts, summaries, planning. Something that works “next to us,” neatly within the lines. But in practice, AI takes a seat at the table in three much more fundamental places.

First, in the formulation of the question.

Before a single line of code runs, it has already been decided which KPIs count, who qualifies as a “risk,” who as “high potential.” That seems neutral, but it determines which view of humanity, which norm, and which ambition we pour into the system. Everything that follows tracks that line.

Next, in the interpretation of reality.

Models determine which signals are amplified and which disappear into the noise. What is easy to measure gains weight; what does not fit the model is easily buried. The experience of an employee, the context of a neighborhood, the history of a client relationship—they disappear from view if the model cannot process them.

And finally, in the distribution of responsibility.

As soon as things become tense, suddenly “the system” stands between people. “That’s what the dashboard says.” “The algorithm sees an increased risk here.” People hide behind that more quickly than they realize. Not out of ill will, but because it is comfortable: the difficult choice is delegated to a graph.

We often think AI mainly makes things more efficient.

In reality, it reorganizes who is allowed to observe, interpret, and decide.

Information sovereignty: who is really at the wheel?

The core concept I use in organizations is information sovereignty.

By that I mean something very concrete:

  • Who determines which information counts?
  • Who is allowed to challenge the model—and, without repercussions, say: “this is not correct for this situation”?
  • Who is accountable if the system is structurally wrong?

In many organizations I see three recurring patterns.

The first I call the model as oracle.

Dashboards are treated as truth; anyone who doubts them feels like a nuisance. The nuance of "it helps us to see better" silently shifts to "it has said that…".

The second pattern is “the system says so” as a shield.

Difficult conversations about performance, risks, or behavior are outsourced to scores and graphs. It is less confrontational to refer to a model than to put into words yourself what you see and feel.

The third pattern is siloed responsibility.

IT implements, procurement handles the contracts, management steers on outcomes, and teams are left to "experience" what it does. Everyone is involved, but no one truly owns the question of what this does to people, culture, and professional space.

At that moment, AI is no longer a tool but the new center of gravity in the organization. Decisions, conversations, and careers begin to organize themselves around the system. And remarkably, almost no one ever explicitly said "yes" to that.

Leadership in an AI era

For me, true leadership in an AI era is therefore less about “learning to work with new tools” and much more about ownership of the filters of your organizational brain.

Every organization has such a brain: a way in which information enters, is weighed, shared, and translated into decisions. Where AI strengthens that brain, it becomes crucial which filters we build into it.

Then questions come to the table such as:

  • Which questions do we actually place in the system? What do we not ask?
  • Which assumptions underlie our models? About people, performance, risk, success?
  • Which space do we create to refute the system—and who actually has the position to do so?

That is where the real tipping point lies.

Not in whether a tool becomes even smarter, but in whether we, as humans and as leaders, stand firm when the system says something different from our conscience, our experience, or our professional intuition.

Three questions that can already set something in motion today

You do not need a new policy document to begin with this. Three questions in your next executive meeting can already reveal a great deal:

  • Where in our organization are we already working with AI or advanced algorithms, without calling it that?
    Think of scoring models, risk indicators, “smart” selection or planning.
  • In which places do we implicitly use “the system” as an excuse not to have a difficult conversation?
    Where do we let a dashboard speak where a human being should actually speak?
  • Where must we make explicit: “here AI is a tool, but responsibility remains with us, as people”?
    And do we dare to say that out loud to employees, customers, citizens?

These are not technical questions.

They are questions about culture, power, and courage.

Why I built a Quick Scan

For precisely that reason, I developed a Quick Scan around AI, leadership, and culture. Not to add yet another tool to the pile, but to make visible where AI is already shifting within your organization today, without being named as such.

The scan does not look at which software you use, but at what happens to:

  • psychological safety and trust,
  • patterns of responsibility and blame,
  • space for reflection, doubt, and dissent,
  • the way identity, value, and view of humanity are embedded in systems.

In other words: it does not look at the screen, but at the undercurrent behind it.

The question ultimately is not whether AI lands in your organization.

That is already happening—via suppliers, platforms, HR, finance, customer contact.

The real question is: under which conditions do you allow it in—and who safeguards those conditions?

If we keep postponing that question, an organization emerges in which people adapt to the system, instead of the other way around. And then the dashboard has long been live, but there is no one left who truly feels responsible when it gets it wrong.

René de Baaij