Leadership - Culture - Organisation

DBVP works precisely where people, organisations, and AI intersect. We approach organisations as living systems, in which strategy, structure, and dashboards are inseparable from undercurrents of fear, ambition, loyalty, and power. Rather than pursuing cosmetic change, we seek transformation from within—through leadership, culture, teams, and professional practice. We combine a psychodynamic and systemic lens with interventions in the midst of real work. We do not see AI as a neutral tool, but as part of how meaning, influence, and decision-making are distributed. Our role is that of guide, mirror, and constructive challenger; ownership always remains with you and your organisation. If you want to sense what this could look like in your practice, read on.

DBVP operates precisely where people, organisation, and technology meet. Everything we do follows a single common thread: transformation from within, in an era in which AI and digitalisation are no longer a side issue but part of the organisational fabric.

We work on a range of themes – organisational development, leadership, culture, teams, professional and personal development – but at the core the question is always the same: how does an organisation, a leader, a team, a professional become a next, more mature version of itself, without losing itself in superficial change or technological reflexes?

We see organisations as living systems. Strategy, structure, governance, processes, culture, and digital infrastructure are not separate elements but expressions of underlying tensions, convictions, and loyalties. What is visible at the surface – reorganisations, new governance arrangements, leadership programmes, culture programmes, AI implementations – is always connected to what plays out beneath it: fear, grief for what is being lost, a need for control, a longing to belong, the question of who may speak and who may not.

That is why we work both psychodynamically and systemically. Psychodynamic means that we take an interest in the inner logic of behaviour: the defence mechanisms, projections, lines of transference, and loyalties that make people and systems do what they do. Systemic means that we always look at one’s place in the whole: role, mandate, history, field of forces, the sometimes sharp edges of governance and oversight. We never read individual behaviour in isolation from the system in which it arises.

HUMAN-AI is not a separate theme for us, but a lens that runs through everything. AI and data increasingly determine who sees what, who may know, who may decide, and what is experienced as “logical” or “true”. Technology is thereby also a carrier of power, norms, and blind spots. In our work we therefore look not only at people and structures, but also at the systems that help steer the conversation, the pace, and the choices. We use AI both as a tool and as a mirror: what does our technology say about what really matters here?

Our interventions broadly follow the same movement: observing, interpreting, choosing, and practising. We begin by observing: in-depth interviews, observations of key meetings, analysis of the field of forces and of the digital landscape, and working with casework where the real friction lies. For us, inquiry is never neutral; the very act of asking already changes how people look at themselves and their organisation.

Then we interpret. We form hypotheses about what is at play in the undercurrent, about the systemic logic of patterns, and about the role technology plays in them. We mirror these back in conversations with boards, leaders, teams, and professionals. Not as a diagnosis from the outside, but as an invitation to look more precisely together: what do you recognise, what not, and what do we now dare to say?

From that, choice emerges. Which movement is needed now? Where does it begin: with the board and governance, with the leadership collective, with one or a few key teams, with professionals at the heart of the primary process? We help to focus: fewer interventions that cut deep into the real work, rather than a broad palette of loose activities alongside the line organisation.

Finally, we practise – always in real work. Working conferences around genuine strategic dilemmas. Leadership labs in which the group itself is the material. Team sessions around concrete conflicts, mistakes, or breakthroughs. Professional and personal development programmes tied directly to tomorrow’s agenda. Sessions in which we look under the bonnet of systems and AI together. Short learning loops after charged moments, instead of one-off away days.

In all of this work we take a clear position of our own: guide, mirror, constructive challenger. We bring language, structure, sharpness, and holding. But ownership always remains where it belongs: with the organisation, with the leaders, with the teams, and with the people themselves. Transformation from within cannot be rolled out; it asks for people and systems that dare to face themselves – precisely in an era in which technology invites acceleration and numbing.

That is the core of DBVP: slowing down together in the right places, so that people, organisation, and AI can move into a next, more mature ordering.

AI and Change: What Are We Really Talking About?

Legitimacy begins with intent. The model is not central; the central question is which public values we want to optimise, which people are affected, and who intervenes when it starts to chafe. That requires clear boundaries around what remains fundamentally human, and making visible how judgements are formed. In this way, doing justice acquires a rhythm that others can follow.

Beneath that lies the infrastructure that makes public space possible. Open standards, shared data spaces, independent assessment, and real options to stop or switch give technology a proper foundation. Procurement becomes a carrier of values when explainability, verifiability, and reversibility are not an appendix but the starting point. Healthy countervailing power—including towards one’s own systems—is what demonstrates authority.

Governing is relational work. The state, provinces, municipalities, and implementing agencies form a single fabric in which errors can spread, but learning can as well. Fixed moments of joint reflection make outcomes, exceptions, and moral friction visible. Not to point fingers, but to take responsibility and, where necessary, to stop. Public value does not emerge behind closed doors. Citizens must be able to understand, question, and challenge what systems do. That requires more than a register; it requires accessible language, timely involvement, and a place where repair can genuinely begin. Democracy thus becomes a practice, not merely a procedure.

Professionals, public leaders, and supervisors need a new kind of literacy—one that does not require mastering every detail of the code, but does demand sensitivity to assumptions, data quality, bias, and model drift. Judgement is practised through case-based work and dissent, so that the human measure becomes a skill rather than a slogan. Between experimentation and precaution lies the art of discernment. Where risks are small, innovation can be given room within clear guardrails. Where dignity or legal standing is at stake, slowing down deserves priority and countervailing forces are organised explicitly. In this way, speed gains a foundation that remains human.

What remains is an invitation to have the conversations where it truly rubs: about power, about pace, about responsibility that cannot be delegated to machines. Do we make visible where we accelerate, and name honestly where we slow down? In those choices, public administration shows its stature—transformation from within, with society as co-designer.

AI is reshaping the day-to-day work of professionals—and the inner life of every person. It is not only tasks that shift; self-image and craft, the experience of time, and the way we carry responsibility move with it. We now work in a field where people, machines, and meaning continuously redraw one another—on the screen, in the team, and in the undercurrent of what we consider good and right.

For the professional, the rhythm changes. Routine may become lighter, but judgement becomes heavier. Models offer suggestions; you sign for the consequences. That both clarifies and tempts: it feels good when something thinks along with you, and unsettling when it starts to decide with you. The core of the work shifts towards shaping the relationship—between speed and care, between convenience and meaning, between what the model can do and what you are morally willing to do.

Skills take on a different cut. Alongside domain expertise comes model literacy: recognising assumptions, sensing data quality, questioning outputs, spotting drift. It is less about code and more about an intuition for context. At the same time, impoverishment lurks: if the system proposes more and more, you exercise your own muscle less often. Deskilling is not a technical risk but an identity risk: who are you when your tools appear to take over the thinking?

Autonomy becomes relational. AI becomes a colleague, a mirror, and sometimes an invisible foreman. Dashboards steer priorities, recommendations colour attention, and the pressure to comply is subtle but real. The question “am I allowed to deviate?” becomes more important than “am I allowed to use it?” Ownership then means being able to say why you do or do not go along—and carrying what that requires in time, explanation, and courage.

The emotional layer shifts as well. Professionals may feel relief (“finally, help”), shame (“can I still do it myself?”), mistrust (“based on what?”), and sometimes grief for a craft that is disappearing. That palette belongs to transformation from within. It calls for teams that do not smooth tension away, but make it workable—by normalising conversations about power, meaning, and boundaries.

The relationship with clients, students, or citizens shifts with it. Interaction may become more consistent, but also more impersonal when the basis of a judgement can no longer be explained. Trust remains human work. Explanation in plain language, clear routes for remedy when things go wrong, and explicitly marking decisions that remain fundamentally human form the backbone of legitimacy.

Boundaries are being redrawn. The data we leave behind as “professional exhaust” feeds systems that then assess us in return. Privacy and dignity are not side issues, but conditions for being able to think freely. The right to pause, to doubt, and to be incomplete—a measure of professional “opacity”—keeps work human.

Health lives in rhythm. Speed is seductive; attention is finite. Brief pauses for reflection, assumptions made explicit, and moments of dissent are not a luxury but hygiene. That is how technology becomes a foundation rather than a wildfire, and quality remains a virtue rather than a chart.

For the individual behind the professional, it ultimately comes down to compass. AI makes visible where we choose convenience over meaning, where we want to outsource responsibility, and where we find the courage to set boundaries. The question is not whether you use AI, but how you relate to it: curious and critical, open yet bounded, willing to learn—and willing to refuse.

Perhaps it starts here: dare to say out loud which parts of your work you never want to automate fully. Be clear about the values you want to protect when things need to move fast. And seek the conversation where it rubs—within yourself, with colleagues, and with those you serve. In that weave, AI finds its place as a tool with character, and human judgement remains a public good.