{"id":2264,"date":"2025-11-26T08:57:00","date_gmt":"2025-11-26T08:57:00","guid":{"rendered":"https:\/\/dbvp.nl\/?p=2264"},"modified":"2026-05-06T08:58:15","modified_gmt":"2026-05-06T08:58:15","slug":"pim-launched-an-ai-dashboard","status":"publish","type":"post","link":"https:\/\/dbvp.nl\/en\/pim-launched-an-ai-dashboard\/","title":{"rendered":"Pim launched an AI dashboard."},"content":{"rendered":"<p>We show how AI shifts power and responsibility.<br>We ask who is truly at the wheel.<br>We argue that AI should support and that people remain the owners.<\/p>\n\n\n\n<p><strong>Pim launched an AI dashboard.<\/strong><\/p>\n\n\n\n<p>Pim launched an AI dashboard.<\/p>\n\n\n\n<p>AI silently shifts the center of gravity of power, trust, and responsibility.<\/p>\n\n\n\n<p>This blog explores what truly changes when we let dashboards decide\u2014and whether we, as humans, are still at the wheel.<\/p>\n\n\n\n<p>It seemed harmless, until we realized that no one knew who was responsible when the system got it wrong.<\/p>\n\n\n\n<p>It is a scene I see in more and more places. AI is introduced as convenient support\u2014a bit of time saved here, slightly better predictions there\u2014but beneath the surface something else shifts. Not in the technology, but in power, trust, and decision-making. The question is less: what can this system do? and much more: what does this system do to how we as humans relate to one another?<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">AI is not a smart intern<\/h3>\n\n\n\n<p>We like to talk about AI as if it were a kind of digital intern: useful for texts, summaries, planning. Something that works \u201cnext to us,\u201d neatly within the lines. 
But in practice, AI takes a seat at the table in three much more fundamental places.<\/p>\n\n\n\n<p>First, in the formulation of the question.<\/p>\n\n\n\n<p>Before a single line of code runs, it has already been decided which KPIs count, who qualifies as a \u201crisk,\u201d who as \u201chigh potential.\u201d That seems neutral, but it determines which view of humanity, which norm, and which ambition we pour into the system. Everything that follows tracks that line.<\/p>\n\n\n\n<p>Next, in the interpretation of reality.<\/p>\n\n\n\n<p>Models determine which signals are amplified and which disappear into the noise. What is easy to measure gains weight; what does not fit the model is easily buried. The experience of an employee, the context of a neighborhood, the history of a client relationship\u2014they disappear from view if the model cannot process them.<\/p>\n\n\n\n<p>And finally, in the distribution of responsibility.<\/p>\n\n\n\n<p>As soon as things become tense, suddenly \u201cthe system\u201d stands between people. \u201cThat\u2019s what the dashboard says.\u201d \u201cThe algorithm sees an increased risk here.\u201d People hide behind that more quickly than they realize. 
Not out of ill will, but because it is comfortable: the difficult choice is delegated to a graph.<\/p>\n\n\n\n<p>We often think AI mainly makes things more efficient.<\/p>\n\n\n\n<p>In reality, it reorganizes who is allowed to observe, interpret, and decide.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Information sovereignty: who is really at the wheel?<\/h3>\n\n\n\n<p>The core concept I use in organizations is information sovereignty.<\/p>\n\n\n\n<p>By that I mean something very concrete:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Who determines which information counts?<\/li>\n\n\n\n<li>Who is allowed to challenge the model\u2014and, without repercussions, say: \u201cthis is not correct for this situation\u201d?<\/li>\n\n\n\n<li>Who is accountable if the system is structurally wrong?<\/li>\n<\/ul>\n\n\n\n<p>In many organizations I see three recurring patterns.<\/p>\n\n\n\n<p>The first I call the model as oracle.<\/p>\n\n\n\n<p>Dashboards are treated as truth; anyone who doubts feels like a nuisance. The nuance of \u201cit helps us look better\u201d silently shifts to \u201cit has said so\u2026\u201d.<\/p>\n\n\n\n<p>The second pattern is \u201cthe system says so\u201d as a shield.<\/p>\n\n\n\n<p>Difficult conversations about performance, risks, or behavior are outsourced to scores and graphs. It is less confrontational to refer to a model than to put what you see and feel into words yourself.<\/p>\n\n\n\n<p>The third pattern is siloed responsibility.<\/p>\n\n\n\n<p>IT implements, procurement handles the contract, management steers on outcomes, and teams are left to \u201cexperience\u201d what it does. Everyone is involved, but no one truly owns the question of what this does to people, culture, and professional space.<\/p>\n\n\n\n<p>At that moment, AI is no longer a tool, but the new center of gravity in the organization. Decisions, conversations, and careers begin to organize themselves around the system. 
And the remarkable thing is: rarely has anyone explicitly said \u201cyes\u201d to that.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Leadership in an AI era<\/h3>\n\n\n\n<p>For me, true leadership in an AI era is therefore less about \u201clearning to work with new tools\u201d and much more about owning the filters of your organizational brain.<\/p>\n\n\n\n<p>Every organization has such a brain: a way in which information enters, is weighed, shared, and translated into decisions. Where AI strengthens that brain, it becomes crucial which filters we build into it.<\/p>\n\n\n\n<p>Then questions such as these come to the table:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Which questions do we actually place in the system? What do we\u00a0<em>not<\/em>\u00a0ask?<\/li>\n\n\n\n<li>Which assumptions underlie our models? About people, performance, risk, success?<\/li>\n\n\n\n<li>What room do we create to contradict the system\u2014and who is actually in a position to do so?<\/li>\n<\/ul>\n\n\n\n<p>That is where the real tipping point lies.<\/p>\n\n\n\n<p>Not in whether a tool becomes even smarter, but in whether we as humans and as leaders stand firm when the system says something different from our conscience, our experience, or our professional intuition.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Three questions that can already set something in motion today<\/h3>\n\n\n\n<p>You do not need a new policy document to begin with this. 
Three questions in your next executive meeting can already reveal a great deal:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Where in our organization are we already working with AI or advanced algorithms, without calling it that?<br>Think of scoring models, risk indicators, \u201csmart\u201d selection or planning.<\/li>\n\n\n\n<li>In which places do we implicitly use \u201cthe system\u201d as an excuse not to have a difficult conversation?<br>Where do we let a dashboard speak where a human being should actually speak?<\/li>\n\n\n\n<li>Where must we make explicit: \u201chere AI is a tool, but responsibility remains with us, as people\u201d?<br>And do we dare to say that out loud to employees, customers, citizens?<\/li>\n<\/ul>\n\n\n\n<p>These are not technical questions.<\/p>\n\n\n\n<p>They are questions about culture, power, and courage.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Why I built a Quick Scan<\/h3>\n\n\n\n<p>For precisely that reason, I developed a Quick Scan around AI, leadership, and culture. 
Not to add yet another tool to the pile, but to make visible where AI is already shifting within your organization today, without being named as such.<\/p>\n\n\n\n<p>The scan does not look at which software you use, but at what happens to:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>psychological safety and trust,<\/li>\n\n\n\n<li>patterns of responsibility and blame,<\/li>\n\n\n\n<li>space for reflection, doubt, and dissent,<\/li>\n\n\n\n<li>the way identity, value, and view of humanity are embedded in systems.<\/li>\n<\/ul>\n\n\n\n<p>In other words: it does not look at the screen, but at the undercurrent behind it.<\/p>\n\n\n\n<p>The question ultimately is not&nbsp;<em>whether<\/em>&nbsp;AI lands in your organization.<\/p>\n\n\n\n<p>That is already happening\u2014via suppliers, platforms, HR, finance, customer contact.<\/p>\n\n\n\n<p>The real question is: under which conditions do you allow it in\u2014and who safeguards those conditions?<\/p>\n\n\n\n<p>If we keep postponing that question, an organization emerges in which people adapt to the system, instead of the other way around. 
And then the dashboard has long been live, but there is no one left who truly feels responsible when it gets it wrong.<\/p>\n\n\n\n<p><strong>Ren\u00e9 de Baaij<\/strong><\/p>","protected":false},"excerpt":{"rendered":"<p>We show how AI shifts power and responsibility.<br \/>\nWe ask who is truly at the wheel.<br \/>\nWe argue that AI should support and that people remain the owners.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[81],"tags":[],"class_list":["post-2264","post","type-post","status-publish","format-standard","hentry","category-english"],"_links":{"self":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts\/2264","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/comments?post=2264"}],"version-history":[{"count":1,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts\/2264\/revisions"}],"predecessor-version":[{"id":2265,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts\/2264\/revisions\/2265"}],"wp:attachment":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/media?parent=2264"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/categories?post=2264"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/tags?post=2264"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}