{"id":2272,"date":"2026-01-19T09:03:00","date_gmt":"2026-01-19T09:03:00","guid":{"rendered":"https:\/\/dbvp.nl\/?p=2272"},"modified":"2026-05-06T09:04:30","modified_gmt":"2026-05-06T09:04:30","slug":"ai-as-mirror-and-engine-governing-from-within-in-an-algorithmic-era","status":"publish","type":"post","link":"https:\/\/dbvp.nl\/en\/ai-as-mirror-and-engine-governing-from-within-in-an-algorithmic-era\/","title":{"rendered":"AI as mirror and engine \u2013 governing from within in an algorithmic era"},"content":{"rendered":"<p>We argue that AI amplifies what already exists in power and behavior.<br>We emphasize that governance and human counterforce are essential.<br>We advocate for simple, traceable craftsmanship.<\/p>\n\n\n\n<p><strong>20260212 AI as Mirror and Engine \u2013 governing from within in an algorithmic age<\/strong><\/p>\n\n\n\n<p><em>Ren\u00e9 de Baaij, Groesbeek, 12 february 2025<\/em><\/p>\n\n\n\n<p><strong>Reader\u2019s Guide<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>For whom:<\/strong>\u00a0RvB (Executive Board) \/ RvC (Supervisory Board), MT (management team) and public sector leaders; focus on strategy, legitimacy and duties of care.<\/li>\n\n\n\n<li><strong>How to read:<\/strong>\u00a0sections 1\u20137 build the argument (human &amp; system); sections 9\u201311 provide the instrumentarium. Read the Prologue (section 0) for context; for implementation feel free to jump to 9\u201311 and leaf back for depth.<\/li>\n\n\n\n<li><strong>What you\u2019ll take away:<\/strong>\u00a0three decision lines (value, risk, legitimacy), one cadence (monitor\u2013challenge\u2013audit) and five artefacts (model card, logbook, risk file, user guide, exit plan).<\/li>\n<\/ul>\n\n\n\n<p><strong>Prologue \u2013 The room, the screen, the gaze<\/strong><\/p>\n\n\n\n<p>It\u2019s early. The screen lights up before the meeting room fills. The first question of the day is not a greeting but a prompt. 
An answer rolls out that holds together better than yesterday\u2019s notes. Someone sighs with relief, someone else frowns, a third feels inexplicable irritation. In those three micro\u2011reactions the playing field is exposed: AI as promise, as threat, as mirror. What happens here is not merely technical or procedural; it is also psychological, relational, managerial and legal. It touches how you set direction, how teams make meaning and how the organisation carries responsibility.<\/p>\n\n\n\n<p>This longread connects two lenses that rarely meet in a single story: the psychodynamic, relational and humanistic foundation&nbsp;<strong>and<\/strong>&nbsp;the managerial, business and legal perspective. We look inward and outward at the same time, at undercurrent and uppercurrent. At what AI does with people, and what people do with AI; at steering, structure and accountability.<\/p>\n\n\n\n<p>The core message is simple and sharp:&nbsp;<strong>AI does not change humans into another species; it amplifies what is already there \u2013 and makes the consequences administratively and legally explicit.<\/strong><\/p>\n\n\n\n<p><strong>Core statements (with brief explanations)<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>AI amplifies what exists.<\/strong><br>It strengthens patterns, speeds up rhythms and makes implicit norms explicit in code.<br>That yields clarity, and also enlarged blind spots if counter\u2011force is missing.<\/li>\n\n\n\n<li><strong>Neutrality is an illusion without governance.<\/strong><br>A model is not an arbiter but a design choice with assumptions, data and thresholds.<br>Secure revocability and contestability, or \u2018efficiency\u2019 becomes an alibi.<\/li>\n\n\n\n<li><strong>Relationship precedes technology.<\/strong><br>What the system does to people and people do to the system shapes culture.<br>That requires dialogue, exception and repair as structural functions.<\/li>\n\n\n\n<li><strong>Legitimacy is evidenced 
action.<\/strong><br>Care counts only when it is traceable in logbooks, cards and decisions.<br>Explanation and remediation are not side issues; they are part of the product.<\/li>\n<\/ul>\n\n\n\n<p><strong>1. Inner world and uppercurrent \u2013 projections onto the algorithm<\/strong><\/p>\n\n\n\n<p>In classic hierarchies omnipotence, certainty and omniscience were projected onto the leader. Now the algorithm slides into that role. It appears as the new \u201cobject\u201d: smart, fast, apparently neutral. That appearance of neutrality makes it attractive as a carrier of idealisation and denial. \u201cThe model says so\u201d has grown into a modern defence mechanism \u2013 a way to avoid shame, doubt or moral burden.<\/p>\n\n\n\n<p>Psychodynamically this is not aberrant but predictable: transference and counter\u2011transference occur as much&nbsp;<strong>between people and systems<\/strong>&nbsp;as between people. We speak to the system, but really to our need for certainty. We listen to the score, but really to our fear of falling short.<\/p>\n\n\n\n<p>Administratively this carries through. Value creation shifts from individual expertise to the&nbsp;<strong>configuration<\/strong>&nbsp;of data, processes, models and governance. 
Legally the question shifts with it: not \u201cwho did this?\u201d, but \u201chow is the system designed, who owns it, which duties of care applied and is remediation possible?\u201d<\/p>\n\n\n\n<p>In terms of motivation this touches autonomy, competence and relatedness (Self\u2011Determination Theory; see note 1).<\/p>\n\n\n\n<p>Three simpler, therefore harder questions belong to every AI file from now on:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>What does this system do to our roles and relationships?<\/strong>\u00a0Who gets a voice, who falls silent?<\/li>\n\n\n\n<li><strong>Where do we use data and models to avoid contact rather than deepen it?<\/strong><\/li>\n\n\n\n<li><strong>How do we stay rooted in empathy and proportionality precisely when decisions are being \u2018optimised\u2019?<\/strong><\/li>\n<\/ol>\n\n\n\n<p>\u2800<em>Those who avoid them gain speed without direction, and compliance without legitimacy.<\/em><\/p>\n\n\n\n<p><strong>2. AI as system knot \u2013 patterns, power and variety<\/strong><\/p>\n\n\n\n<p>AI is not a loose tool; it is a&nbsp;<strong>system knot<\/strong>&nbsp;that creates new feedback loops, dependencies and power structures. From systems thinking and complexity theory the core question shifts: not whether AI \u201cworks\u201d, but&nbsp;<strong>which patterns it amplifies or suppresses<\/strong>.<\/p>\n\n\n\n<p><em>Requisite variety<\/em>&nbsp;becomes concrete (see note 8). Organisations that want to handle complex reality need variety in perspectives. Many models do the opposite: they score, sort and normalise. That is efficient, but also a source of&nbsp;<strong>cultural drift<\/strong>&nbsp;toward conformity and risk aversion. Less deviation, less dissent, less creativity \u2013 with a managerial side effect: blind spots that only surface in incidents and complaints.<\/p>\n\n\n\n<p>In systemic language&nbsp;<strong>AI encodes implicit norms<\/strong>. 
Power and habitus shift to datasets, features and thresholds. Without attention you entrench exclusion as \u201cobjective logic\u201d. That is why governance is not primarily a document but a&nbsp;<strong>relational design<\/strong>: who defines the data, who selects the models, who can make exceptions and who can overrule? This perspective is sociomaterial in nature (see note 6) and aligns with complex responsive processes (see note 7).<\/p>\n\n\n\n<p>For you as a leader a simple rule with hard consequences follows:&nbsp;<strong>AI programmes are change programmes.<\/strong>&nbsp;They intervene in roles, processes, identity and accountability. Those who run them as IT (information technology) projects get technical deliverables and social collateral damage.<\/p>\n\n\n\n<p><strong>3. Strategy and operations \u2013 speed, meaning and yardsticks<\/strong><\/p>\n\n\n\n<p>AI can lift productivity, shorten lead times and make quality more predictable. But without a&nbsp;<strong>target architecture<\/strong>&nbsp;it dissolves into loose pilots and demo wins. Strategically the order is:&nbsp;<strong>problem \u2192 value \u2192 risk \u2192 design<\/strong>. First define clearly which business problem and which value formula,&nbsp;<em>then<\/em>&nbsp;tools. Otherwise technology becomes solution and the human becomes problem.<\/p>\n\n\n\n<p>Operationally AI requires a visible&nbsp;<strong>chain design<\/strong>&nbsp;from question to effect. Intake, design, validation, go\u2011live, monitoring, retraining and sunsetting form one cadence. Thresholds and retrain criteria are set in advance. Prompts, features, hyperparameters and versions become&nbsp;<strong>assets with ownership<\/strong>. You see: this is no hobby corner; this is service management for decision\u2011making.<\/p>\n\n\n\n<p>Learning is the weak spot. 
AI excels at&nbsp;<strong>single\u2011loop<\/strong>: getting better at what we already do.&nbsp;<strong>Double\u2011loop<\/strong>&nbsp;\u2013 whether we measure the right things and use the right success criteria \u2013 remains a human and organisational process. If that layer is skipped, blindness grows with high precision. The result: beautiful dashboards, wrong decisions (Kolb; see note 3; Argyris &amp; Sch\u00f6n; see note 4; Mezirow; see note 5).<\/p>\n\n\n\n<p>Therefore learning organisations put three practices in place: (1)&nbsp;<strong>reflection on outcomes<\/strong>&nbsp;\u2013 not only \u201cis it correct?\u201d, but \u201cwhat does it mean?\u201d; (2)&nbsp;<strong>dialogical decision\u2011making<\/strong>&nbsp;\u2013 data as conversation starter, not as arbiter; (3)&nbsp;<strong>co\u2011evolution of capabilities<\/strong>&nbsp;\u2013 digital literacy plus judgement, moral sensitivity and legal alertness.<\/p>\n\n\n\n<p><strong>4. Lawfulness and legitimacy \u2013 the normative infrastructure<\/strong><\/p>\n\n\n\n<p>AI operates within a&nbsp;<strong>normative landscape<\/strong>&nbsp;in Europe that is layered and in motion. The basic principles of data protection have stood for years: lawfulness, purpose limitation, data minimisation, accuracy, storage limitation, integrity\/confidentiality and accountability.<\/p>\n\n\n\n<p>Around these sit rules for platforms, data access and portability, safety and duties of care in essential sectors (see notes 9\u201313). AI\u2011specific frameworks ask for controlled risk analysis, documentation, monitoring and options for correction.<\/p>\n\n\n\n<p>The administrative verb is&nbsp;<strong>evidencable<\/strong>. Not only being careful, but being able to show that you&nbsp;<em>were<\/em>&nbsp;careful: model cards, logbooks, versioning, impact assessments, override decisions with reasoning, incident notes with remediation. 
Fix counter\u2011force: who may stop the system, and when?<\/p>\n\n\n\n<p>Legitimacy is more than legality. It rests on&nbsp;<strong>explainability, contestability and repair<\/strong>. Whoever automates must also make visible&nbsp;<strong>what is not<\/strong>&nbsp;automated. Draw red lines: no behavioural steering without necessity and proportionality; no black box in contexts with high error costs and low explainability; no data collection without necessity and a clear legal basis. Make this explicit, in language that customers, clients, citizens and employees understand.<\/p>\n\n\n\n<p><strong>5. Identity and embodied work \u2013 the ability to carry tension<\/strong><\/p>\n\n\n\n<p>AI touches professional identity. If systems analyse, advise and create, what does it mean to be an expert or a leader? The task shifts from&nbsp;<strong>knowing more<\/strong>&nbsp;to&nbsp;<strong>being able to carry more<\/strong>: tension, uncertainty, slowness, conflict and moral ambiguity.<\/p>\n\n\n\n<p>Spiritual and humanistic language finds a concrete function here: staying oriented in a world that seems mechanically makeable. The core questions become:&nbsp;<strong>what will we not do, even if we can?<\/strong>&nbsp;Which applications do we refuse because they hollow out relationships, dignity or future?<\/p>\n\n\n\n<p>Embodied work is a precondition. AI accelerates rhythm and raises information pressure. The body pays first. Teams that organise rhythm, silence, recovery and reflection stay on course. Teams that skip that layer burn out \u2013 and that is not only a wellbeing issue but also a managerial risk: drop\u2011out, errors, eroding trust.<\/p>\n\n\n\n<p><strong>6. Methodical anchoring \u2013 inquiry as a practice of awareness<\/strong><\/p>\n\n\n\n<p>Most AI conversations strand in tool choice, pilots and risk lists. 
What is missing is&nbsp;<strong>methodical anchoring<\/strong>: a way of working where reflection, dialogue and meaning\u2011making are taken as seriously as accuracy and uptime.<\/p>\n\n\n\n<p>Qualitative methods \u2013 interview, casework, discourse and narrative analysis \u2013 are indispensable for understanding what AI does to experience and interaction. Numbers show patterns; stories show meaning. Combine this with&nbsp;<strong>action research<\/strong>: formulate hypotheses, follow effects, adjust, decide again. Every AI programme becomes a learning process.<\/p>\n\n\n\n<p>On the formal side this calls for an&nbsp;<strong>AI management system<\/strong>&nbsp;(see notes 14\u201319): policy and standards; processes for risk, validation, monitoring, incident response, retirement; competencies (model risk, data ethics, legal interpretation); assurance (internal audit, external review, maturity measurement). And \u2013 crucial \u2013&nbsp;<strong>due diligence<\/strong>&nbsp;toward suppliers: data sources and licences, evaluation sets and bias, red\u2011teaming, content authenticity, portability and exit options. Contract audit rights and sanctions for non\u2011compliance.<\/p>\n\n\n\n<p>Methodology thus also becomes&nbsp;<strong>ethics<\/strong>: how do we reach judgement, who is heard, which data count and which do not? Ownership of meaning must remain with people, not with the vendor. This aligns with international principles and guidance (see notes 20\u201321).<\/p>\n\n\n\n<p><strong>7. Paradoxes and conditions \u2013 the necessary discomfort<\/strong><\/p>\n\n\n\n<p>Managerial reality is a field of tensions. 
AI makes them visible and unavoidable:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Efficiency \u2194 resilience.<\/strong>\u00a0Faster and cheaper can be more fragile.\u00a0<em>Condition:<\/em>\u00a0choose resilience where disruption is costly; choose efficiency where recovery is easy.<\/li>\n\n\n\n<li><strong>Centralisation \u2194 autonomy.<\/strong>\u00a0One platform gives scale, but diminishes local wisdom.\u00a0<em>Condition:<\/em>\u00a0central standards, local variation in application.<\/li>\n\n\n\n<li><strong>Automation \u2194 human dignity.<\/strong>\u00a0Delegation relieves, but can de\u2011humanise.\u00a0<em>Condition:<\/em>\u00a0clear bounds for delegation, with contestability and repair.<\/li>\n\n\n\n<li><strong>Data minimisation \u2194 personalisation.<\/strong>\u00a0More data is not always better.\u00a0<em>Condition:<\/em>\u00a0make purpose limitation explicit, limit retention, demonstrate effect.<\/li>\n\n\n\n<li><strong>Explainability \u2194 performance.<\/strong>\u00a0Complex models sometimes perform better yet are less traceable.\u00a0<em>Condition:<\/em>\u00a0match model choice to error costs and context.<\/li>\n\n\n\n<li><strong>Openness \u2194 security.<\/strong>\u00a0Transparency builds trust, and increases attack surface.\u00a0<em>Condition:<\/em>\u00a0differentiate public, internal and confidential.<\/li>\n<\/ul>\n\n\n\n<p>Tensions do not disappear by policy; they require cadence and conversation. Leading is holding course amid contradictory truths.<\/p>\n\n\n\n<p><strong>9. Minimal viable governance \u2013 doing a few things very well<\/strong><\/p>\n\n\n\n<p>Big ambitions succeed by&nbsp;<strong>doing few things consistently<\/strong>. 
A working minimum you can start with tomorrow:<\/p>\n\n\n\n<p><strong>Roles<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>System owner (accountable)<\/strong>\u00a0\u2013 mandate, resources, reporting.<br><em>Holds the mandate and bears integral responsibility.<\/em><br><em>Secures decision lines, resources and reporting toward the board.<\/em><\/li>\n\n\n\n<li><strong>Model owner<\/strong>\u00a0\u2013 performance, bias, lifecycle.<br><em>Steers performance, bias and ageing of the model end\u2011to\u2011end.<\/em><br><em>Guards validation, monitoring and timely retraining or retirement.<\/em><\/li>\n\n\n\n<li><strong>Data owner<\/strong>\u00a0\u2013 definitions, quality, access.<br><em>Defines sources, quality and access rights for data.<\/em><br><em>Guards purpose limitation, minimisation and data characteristics for models.<\/em><\/li>\n\n\n\n<li><strong>Risk &amp; compliance<\/strong>\u00a0\u2013 frameworks, tests, escalations.<br><em>Translates laws and standards into workable requirements in the process.<\/em><br><em>Orchestrates tests, exceptions and escalations with timeliness.<\/em><\/li>\n\n\n\n<li><strong>Security<\/strong>\u00a0\u2013 threats, resilience, incidents.<br><em>Assesses threats and vulnerabilities across the full chain.<\/em><br><em>Secures detection, response and recovery including supplier dependencies.<\/em><\/li>\n\n\n\n<li><strong>Business owner<\/strong>\u00a0\u2013 purpose, value, effects in practice.<br><em>Anchors problem definition, value and effect in operations.<\/em><br><em>Organises frontline feedback and decides stop\/go.<\/em><\/li>\n<\/ul>\n\n\n\n<p>\u2800<strong>Cadence<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Monthly:<\/strong>\u00a0model monitoring and incident review.<br><em>Assess performance, drift and incidents with concrete actions.<\/em><br><em>Normalise small corrections so large failures do not arise.<\/em><\/li>\n\n\n\n<li><strong>Quarterly:<\/strong>\u00a0challenge session 
with independent counter\u2011force.<br><em>Provides organised dissent and alternative assumptions.<\/em><br><em>Prevents groupthink and refines thresholds and scope.<\/em><\/li>\n\n\n\n<li><strong>Semi\u2011annual:<\/strong>\u00a0audit of documentation, drift, fairness and rights.<br><em>Gives assurance on documentation, fairness and rights protection.<\/em><br><em>Makes improvement decisions traceable for audit and stakeholders.<\/em><\/li>\n\n\n\n<li><strong>On every change:<\/strong>\u00a0impact assessment, test report, go\/no\u2011go by the board with override criteria.<br><em>Conducts impact analysis and tests upfront with clear criteria.<\/em><br><em>Records the reasoning for go\/no\u2011go and any overrides traceably.<\/em><\/li>\n<\/ul>\n\n\n\n<p>\u2800<strong>Artefacts<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Model card<\/strong>\u00a0(purpose, data, bounds, performance, fairness, explainability).<br><em>Describes purpose, bounds, data, performance and assumptions.<\/em><br><em>Forms the basis for explanation, audit and responsible use.<\/em><\/li>\n\n\n\n<li><strong>Logbook<\/strong>\u00a0(versions, parameters, overrides, incidents, remediation).<br><em>Records all versions, parameters, overrides and incidents.<\/em><br><em>Enables review, learning and liability.<\/em><\/li>\n\n\n\n<li><strong>Risk file<\/strong>\u00a0(error costs, context, mitigations, residual risk).<br><em>Translates error costs and context into mitigations and residual risk.<\/em><br><em>Supports decisions about model choice and degree of delegation.<\/em><\/li>\n\n\n\n<li><strong>User guide<\/strong>\u00a0in plain language (rights, redress, contact).<br><em>Explains rights, contestation and contact clearly to users.<\/em><br><em>Raises acceptance and makes contestability practical and low\u2011threshold.<\/em><\/li>\n\n\n\n<li><strong>Exit plan<\/strong>\u00a0(vendor lock\u2011in, portability, escrow).<br><em>Defines portability, escrow and migration steps 
upon termination.<\/em><br><em>Reduces vendor lock\u2011in and continuity risks.<\/em><\/li>\n<\/ul>\n\n\n\n<p>\u2800<em>All this can be kept compact. It is not a paper factory; it is traceable craftsmanship.<\/em><\/p>\n\n\n\n<p><strong>10. From principle to practice \u2013 three design questions<\/strong><\/p>\n\n\n\n<p><strong>A. Where do we place the human counter\u2011force?<\/strong><br>In the loop (real time), on the loop (periodic oversight) or above the loop (framing). Choose deliberately per use case and explain why.<br><em>In\u2011the\u2011loop lowers error costs when impact is high and tolerance is low.<\/em><br><em>Above\u2011 and on\u2011the\u2011loop secure proportionality and learnability over time.<\/em><\/p>\n\n\n\n<p><strong>B. How does exception become normal?<\/strong><br>Every rule has edge cases. Make exception\u2011making legitimate: who may do it, how is it reasoned, how is it learned from?<br><em>Set procedure, criteria and a duty to state reasons \u2013 no ad\u2011hoc leniency.<\/em><br><em>Document, share patterns and take periodic improvement decisions.<\/em><\/p>\n\n\n\n<p><strong>C. What degree of variety is needed?<\/strong><br>Standardise where safety and scalability require it. Allow variation where contextual knowledge adds quality. Use governance to&nbsp;<strong>make difference possible and traceable<\/strong>, not to erase it.<br><em>Too much standardisation costs quality; too much variety costs scale.<\/em><br><em>Define decision space and deviation routes explicitly per context.<\/em><\/p>\n\n\n\n<p><strong>11. 
What now, what later, what not \u2013 a practical stance<\/strong><\/p>\n\n\n\n<p><strong>Now (0\u20133 months)<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Inventory<\/strong>\u00a0all AI applications and models; appoint owners.<br><em>Without visibility of the actual footprint, governance stays theoretical and risk unfocused.<\/em><br><em>A lean inventory prevents shadow IT (information technology), speeds audits and enables prioritisation.<\/em><\/li>\n\n\n\n<li><strong>Define risk appetite<\/strong>\u00a0and critical quality attributes (safety, fairness, privacy, explainability).<br><em>Without explicit boundaries, trade\u2011offs become implicit and arbitrary.<\/em><br><em>Clear thresholds steer design choices and speed decisions in incidents.<\/em><\/li>\n\n\n\n<li><strong>Set stop criteria and override authority;<\/strong>\u00a0arrange a practical kill\u2011switch.<br><em>A system that cannot be stopped safely is not safe to start.<\/em><br><em>Clear authorities limit damage, speed remediation and raise trust.<\/em><\/li>\n\n\n\n<li><strong>Start impact assessments<\/strong>\u00a0for the top three use cases; document data flows, assumptions and error costs.<br><em>Impact concerns people, processes and rights\u2014not only metrics.<\/em><br><em>Early insight prevents expensive redesigns and enables explainability to the outside world.<\/em><\/li>\n\n\n\n<li><strong>Set the cadence:<\/strong>\u00a0monthly monitoring and a quarterly challenge with independent counter\u2011force.<br><em>Without cadence attention drifts to incidents; with cadence learning capacity grows.<\/em><br><em>Independent dissent prevents groupthink and normalises course correction.<\/em><\/li>\n<\/ul>\n\n\n\n<p>\u2800<strong>Later (6\u201312 months)<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Build the AI management system:<\/strong>\u00a0processes, roles, audit trails, maturity measurement.<br><em>Quality arises not from intention but from 
repeatable work practices.<\/em><br><em>A light, traceable system makes performance scalable without bureaucracy.<\/em><\/li>\n\n\n\n<li><strong>Vendor due diligence:<\/strong>\u00a0licences, datasets, evaluations, red\u2011teaming, content authenticity, exit options.<br><em>What you outsource remains your responsibility toward customer and regulator.<\/em><br><em>Contract audit rights and exit options to limit lock\u2011in and legal exposure.<\/em><\/li>\n\n\n\n<li><strong>Train leaders and key roles<\/strong>\u00a0in judgement under uncertainty and legal interpretation.<br><em>Technical knowledge without judgement increases the precision of wrong decisions.<\/em><br><em>Developing shared language and criteria raises consistency and explainability.<\/em><\/li>\n\n\n\n<li><strong>Integrate reflection into operations:<\/strong>\u00a0case reviews, frontline narratives, improvement decisions with reasoning.<br><em>Stories give meaning to data and make effects felt and correctable.<\/em><br><em>Recording decision rationales creates a reproducible quality culture.<\/em><\/li>\n<\/ul>\n\n\n\n<p>\u2800<strong>Not (unless exceptionally justified)<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Irreversible delegation<\/strong>\u00a0of high\u2011impact decisions without human counter\u2011force.<br><em>Delegation without a brake harms dignity and legal protection when errors occur.<\/em><br><em>Therefore limit scope and secure contestability and remediation up front.<\/em><\/li>\n\n\n\n<li><strong>Black box<\/strong>\u00a0in contexts with high error costs and low explainability.<br><em>Where the price of error is high, traceability is part of safety.<\/em><br><em>Choose simplicity or hybrid models when explanation is essential for trust.<\/em><\/li>\n\n\n\n<li><strong>Data collection<\/strong>\u00a0without necessity, legal basis and clear retention period.<br><em>More data is not automatically better; it increases risks without proportional 
value.<\/em><br><em>Formulate purpose limitation, minimise, and ensure timely deletion.<\/em><\/li>\n\n\n\n<li><strong>KPIs (key performance indicators)<\/strong>\u00a0that perfectly measure the unimportant and say nothing about meaning and legitimacy.<br><em>Precision without relevance steers toward pseudo\u2011performance and undermines trust.<\/em><br><em>Tie metrics to value, rights and effects in lived reality.<\/em><\/li>\n<\/ul>\n\n\n\n<p>\u2800<\/p>\n\n\n\n<p><strong>12. Closing \u2013 Leadership as the art of carrying<\/strong><\/p>\n\n\n\n<p>AI is not an outsider dropping by; it is an&nbsp;<strong>extension of our capacities and our blind spots<\/strong>. It enlarges our ability and our lack. Leading in this era is the art of&nbsp;<strong>carrying<\/strong>: carrying tension, carrying plurality, carrying the slowness that quality needs. It asks for imagination&nbsp;<strong>and<\/strong>&nbsp;discipline, for inner orientation&nbsp;<strong>and<\/strong>&nbsp;traceable processes.<\/p>\n\n\n\n<p>The invitation is simple and demanding:<\/p>\n\n\n\n<p><strong>Make technology an ally of humanity and the rule of law.<\/strong><br><strong>Design counter\u2011force that works when it matters.<\/strong><br><strong>Keep the conversation going \u2013 especially where it rubs.<\/strong><\/p>\n\n\n\n<p>If you hold these three lines, AI becomes no black box but a clear mirror. Not machinery that replaces people, but an infrastructure that&nbsp;<strong>amplifies<\/strong>&nbsp;professional judgement, relational quality and public trust. 
That is transformation from within: step by step, visibly accountable, with a cadence that can be sustained.<\/p>\n\n\n\n<p><strong>Reflective question to close:<\/strong><br><em>Which decision in your organisation could be better tomorrow by less automation and more explanation \u2013 and whom will you invite today to look at it together?<\/em><\/p>\n\n\n\n<p><strong>Notes &amp; literature (selection)<\/strong><\/p>\n\n\n\n<p><strong>Theory &amp; concepts (human, learning, system)<\/strong><\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Ryan, R. M., &amp; Deci, E. L. (2000).\u00a0<em>Self\u2011Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well\u2011Being.<\/em>\u00a0<strong>American Psychologist, 55(1),<\/strong>\u00a068\u201378.\u00a0<a href=\"https:\/\/selfdeterminationtheory.org\/SDT\/documents\/2000_RyanDeci_SDT.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/selfdeterminationtheory.org\/SDT\/documents\/2000_RyanDeci_SDT.pdf<\/a><\/li>\n\n\n\n<li>Maslach, C., Schaufeli, W. B., &amp; Leiter, M. P. (2001).\u00a0<em>Job Burnout.<\/em>\u00a0<strong>Annual Review of Psychology, 52,<\/strong>\u00a0397\u2013422.\u00a0<a href=\"https:\/\/dspace.library.uu.nl\/bitstream\/handle\/1874\/13606\/maslach_01_jobburnout.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/dspace.library.uu.nl\/bitstream\/handle\/1874\/13606\/maslach_01_jobburnout.pdf<\/a><\/li>\n\n\n\n<li>Kolb, D. A. (1984).\u00a0<em>Experiential Learning: Experience as the Source of Learning and Development.<\/em>\u00a0Prentice\u2011Hall. (Overview:\u00a0<a href=\"https:\/\/books.google.com\/books\/about\/Experiential_Learning.html?id=zXruAAAAMAAJ\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/books.google.com\/books\/about\/Experiential_Learning.html?id=zXruAAAAMAAJ<\/a>)<\/li>\n\n\n\n<li>Argyris, C., &amp; Sch\u00f6n, D. A. (1978).\u00a0<em>Organizational Learning: A Theory of Action Perspective.<\/em>\u00a0Addison\u2011Wesley. 
(Public version:\u00a0<a href=\"https:\/\/archive.org\/details\/organizationalle00chri\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/archive.org\/details\/organizationalle00chri<\/a>)<\/li>\n\n\n\n<li>Mezirow, J. (1991).\u00a0<em>Transformative Dimensions of Adult Learning.<\/em>\u00a0Jossey\u2011Bass. (Summary:\u00a0<a href=\"https:\/\/eric.ed.gov\/?id=ED353469\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/eric.ed.gov\/?id=ED353469<\/a>)<\/li>\n\n\n\n<li>Orlikowski, W. J. (2007).\u00a0<em>Sociomaterial Practices: Exploring Technology at Work.<\/em>\u00a0<strong>Organization Studies, 28(9),<\/strong>\u00a01435\u20131448. (Open access variant:\u00a0<a href=\"https:\/\/www.dhi.ac.uk\/san\/waysofbeing\/data\/data-crone-orlikowski-2007.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.dhi.ac.uk\/san\/waysofbeing\/data\/data-crone-orlikowski-2007.pdf<\/a>)<\/li>\n\n\n\n<li>Stacey, R. D. (2001).\u00a0<em>Complex Responsive Processes in Organizations: Learning and Knowledge Creation.<\/em>\u00a0Routledge. (Overview:\u00a0<a href=\"https:\/\/www.routledge.com\/Complex-Responsive-Processes-in-Organizations-Learning-and-Knowledge-Creation\/Stacey\/p\/book\/9780415249195\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.routledge.com\/Complex-Responsive-Processes-in-Organizations-Learning-and-Knowledge-Creation\/Stacey\/p\/book\/9780415249195<\/a>)<\/li>\n\n\n\n<li>Ashby, W. R. (1956).\u00a0<em>An Introduction to Cybernetics.<\/em>\u00a0Chapman &amp; Hall. 
(Open access:\u00a0<a href=\"https:\/\/archive.org\/download\/AnIntroductionToCybernetics\/AnIntroductionToCybernetics.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/archive.org\/download\/AnIntroductionToCybernetics\/AnIntroductionToCybernetics.pdf<\/a>)<\/li>\n<\/ol>\n\n\n\n<p>\u2800EU framework (reference date 19 January 2026, Europe\/Amsterdam)<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>GDPR<\/strong>\u00a0\u2014 Regulation (EU) 2016\/679. Official publication: EUR\u2011Lex.\u00a0<a href=\"https:\/\/eur-lex.europa.eu\/eli\/reg\/2016\/679\/oj\/eng\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/eur-lex.europa.eu\/eli\/reg\/2016\/679\/oj\/eng<\/a><\/li>\n\n\n\n<li><strong>DSA<\/strong>\u00a0\u2014 Regulation (EU) 2022\/2065 (Digital Services Act). Official publication: EUR\u2011Lex.\u00a0<a href=\"https:\/\/eur-lex.europa.eu\/eli\/reg\/2022\/2065\/oj\/eng\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/eur-lex.europa.eu\/eli\/reg\/2022\/2065\/oj\/eng<\/a><\/li>\n\n\n\n<li><strong>Data Act<\/strong>\u00a0\u2014 Regulation (EU) 2023\/2854. Official publication: EUR\u2011Lex.\u00a0<a href=\"https:\/\/eur-lex.europa.eu\/eli\/reg\/2023\/2854\/oj\/eng\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/eur-lex.europa.eu\/eli\/reg\/2023\/2854\/oj\/eng<\/a><\/li>\n\n\n\n<li><strong>NIS2<\/strong>\u00a0\u2014 Directive (EU) 2022\/2555. Official publication: EUR\u2011Lex.\u00a0<a href=\"https:\/\/eur-lex.europa.eu\/eli\/dir\/2022\/2555\/oj\/eng\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/eur-lex.europa.eu\/eli\/dir\/2022\/2555\/oj\/eng<\/a><\/li>\n\n\n\n<li><strong>EU AI Act<\/strong>\u00a0\u2014 Implementation timeline and application dates. 
European Parliament Research Service (At\u2011a\u2011Glance, 10 June 2025; overview updated July 2024).\u00a0<a href=\"https:\/\/www.europarl.europa.eu\/thinktank\/en\/document\/EPRS_ATA%282025%29772906\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.europarl.europa.eu\/thinktank\/en\/document\/EPRS_ATA(2025)772906<\/a><br>\u2013 In brief: entered into force 1 August 2024; prohibitions applicable from 2 February 2025; broad application date 2 August 2026 (source: EPRS and multiple legal updates). See also\u00a0<a href=\"https:\/\/connectontech.bakermckenzie.com\/eu-ai-act-published-dates-for-action\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/connectontech.bakermckenzie.com\/eu-ai-act-published-dates-for-action\/<\/a>\u00a0and\u00a0<a href=\"https:\/\/www.dlapiper.com\/en-us\/insights\/publications\/2025\/08\/latest-wave-of-obligations-under-the-eu-ai-act-take-effect\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.dlapiper.com\/en-us\/insights\/publications\/2025\/08\/latest-wave-of-obligations-under-the-eu-ai-act-take-effect<\/a><\/li>\n<\/ol>\n\n\n\n<p>Standards &amp; frameworks (management, risk, assurance)<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>ISO\/IEC 42001:2023<\/strong>\u00a0\u2014\u00a0<em>Information technology \u2014 Artificial intelligence \u2014 Management system.<\/em>\u00a0ISO.\u00a0<a href=\"https:\/\/www.iso.org\/standard\/42001\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.iso.org\/standard\/42001<\/a><\/li>\n\n\n\n<li><strong>ISO\/IEC 23894:2023<\/strong>\u00a0\u2014\u00a0<em>Artificial intelligence \u2014 Guidance on risk management.<\/em>\u00a0ISO.\u00a0<a href=\"https:\/\/www.iso.org\/standard\/77304.html\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.iso.org\/standard\/77304.html<\/a><\/li>\n\n\n\n<li><strong>ISO 31000:2018<\/strong>\u00a0\u2014\u00a0<em>Risk management \u2014 Guidelines.<\/em>\u00a0ISO.\u00a0<a 
href=\"https:\/\/www.iso.org\/standard\/65694.html\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.iso.org\/standard\/65694.html<\/a><\/li>\n\n\n\n<li><strong>NIST AI RMF 1.0<\/strong>\u00a0\u2014\u00a0<em>Artificial Intelligence Risk Management Framework.<\/em>\u00a0NIST (26 January 2023).\u00a0<a href=\"https:\/\/www.nist.gov\/itl\/ai-risk-management-framework\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.nist.gov\/itl\/ai-risk-management-framework<\/a>\u00a0(PDF:\u00a0<a href=\"https:\/\/nvlpubs.nist.gov\/nistpubs\/ai\/NIST.AI.100-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/nvlpubs.nist.gov\/nistpubs\/ai\/NIST.AI.100-1.pdf<\/a>)<\/li>\n\n\n\n<li><strong>COSO ERM (2017)<\/strong>\u00a0\u2014\u00a0<em>Enterprise Risk Management\u2014Integrating with Strategy &amp; Performance.<\/em>\u00a0COSO.\u00a0<a href=\"https:\/\/www.coso.org\/enterprise-risk-management\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.coso.org\/enterprise-risk-management<\/a><\/li>\n\n\n\n<li><strong>IIA (2024)<\/strong>\u00a0\u2014\u00a0<em>Global Internal Audit Standards.<\/em>\u00a0The Institute of Internal Auditors.\u00a0<a href=\"https:\/\/www.theiia.org\/en\/standards\/2024-standards\/global-internal-audit-standards\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.theiia.org\/en\/standards\/2024-standards\/global-internal-audit-standards\/<\/a><\/li>\n<\/ol>\n\n\n\n<p>Principles &amp; guidance<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>OECD (2019, updated 2024)<\/strong>\u00a0\u2014\u00a0<em>Recommendation of the Council on Artificial Intelligence (AI Principles).<\/em>\u00a0OECD.\u00a0<a href=\"https:\/\/oecd.ai\/assets\/files\/OECD-LEGAL-0449-en.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/oecd.ai\/assets\/files\/OECD-LEGAL-0449-en.pdf<\/a>\u00a0(Overview:\u00a0<a href=\"https:\/\/oecd.ai\/en\/ai-principles\" target=\"_blank\" rel=\"noreferrer 
noopener\">https:\/\/oecd.ai\/en\/ai-principles<\/a>)<\/li>\n\n\n\n<li><strong>EU High\u2011Level Expert Group on AI (2019)<\/strong>\u00a0\u2014\u00a0<em>Ethics Guidelines for Trustworthy AI.<\/em>\u00a0European Commission.\u00a0<a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai<\/a><\/li>\n<\/ol>","protected":false},"excerpt":{"rendered":"<p>We argue that AI amplifies what already exists in power and behavior.<br \/>\nWe emphasize that governance and human counterforce are essential.<br \/>\nWe advocate for simple, traceable craftsmanship.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[81,72],"tags":[],"class_list":["post-2272","post","type-post","status-publish","format-standard","hentry","category-english","category-longread"],"_links":{"self":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts\/2272","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/comments?post=2272"}],"version-history":[{"count":1,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts\/2272\/revisions"}],"predecessor-version":[{"id":2273,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts\/2272\/revisions\/2273"}],"wp:attachment":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/media?parent=2272"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/categories?post=2272"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dbvp.
nl\/en\/wp-json\/wp\/v2\/tags?post=2272"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}