Be who you are.

AI and the prelude to a coalition agreement

We argue AI is a governance issue, not an ICT dossier.
We call for acceleration with clear boundaries.
We insist AI must not dilute responsibility.

This blog responds to the piece published yesterday, 2 December 2025, ‘Substantive and ambitious agenda of D66 and CDA’ (source: D66 and CDA). Because AI is mentioned only in passing there, I add below the paragraph it needs: ‘AI: opportunity and threat’.

In policy, we often speak about digitalization as if it were a support function for implementation. AI shows that it works the other way around: the way we govern, collaborate, and create value is partly shaped by algorithms. AI is therefore not an ICT dossier but a choice about people and systems. If we do not make that choice explicit, AI will organize our reality on the basis of implicit assumptions.

In the agenda, AI is mentioned mainly as a technological lever for productivity and innovation. Rightly so. But the same technology also increases the risk of creeping damage: bias in decision-making, concentration of power and data, dependence on platform suppliers, the security of critical infrastructure, erosion of craftsmanship. That calls for something policy rarely does: accelerating and setting boundaries at the same time. Precisely for that reason, the agenda should include a paragraph that addresses AI as a socio-technical transition, not merely as a tool.

Why now?

AI quietly shifts the division of roles between human and system. Who gets to draw conclusions? Who bears responsibility? How do we safeguard dignity in automation? As long as these questions remain unaddressed, routines and suppliers will answer them by default. That is predictable, but not necessarily wise. A mature AI paragraph brings together three elements: design (human-centered), practice (craftsmanship, data governance), and governance (rules, oversight, liability).

Proposed paragraph for the agenda

AI: opportunity and threat

The Netherlands embraces artificial intelligence as a driver of innovation and public value. At the same time, we recognize the risks to dignity, justice, and security. We therefore choose acceleration with boundaries.

  1. Human-centered design as the norm.
    In the deployment of AI, the principle applies: human-in-control, explainable where required, and demonstrably safe where it matters. Systems that support decisions about people are designed for auditability, reversibility, and proportional use.
  2. Public data and model sovereignty.
    We invest in a reliable data infrastructure, public models where appropriate, and exportable standards for audit, logging, and documentation. In procurement, government and vital sectors explicitly specify data ownership, reproducibility of outcomes, and exit options.
  3. Risk-based application.
    In high-impact domains (healthcare, security, work, children), we apply stricter requirements regarding quality, bias mitigation, robustness, and human oversight. For generative AI, this means: watermarking, source accountability, and measures against deepfakes in democratic processes.
  4. Professional expertise + machine power.
    AI does not replace professional judgment but enriches it. We invest in training and a new division of roles: who interprets models, who challenges them, who signs off? Leaders safeguard feedback loops between practice, data, and algorithms.
  5. Oversight and liability.
    We establish an independent “Algorithm Authority”: registry, assessment framework, model card, damage reporting point, and a route for suspension or shutdown in case of risks. Liability follows the decision chain; citizens retain effective rights to objection and redress.
  6. Innovation with a societal mission.
    We stimulate startups, research clusters, and applications addressing public challenges (energy, mobility, healthcare, education), fostering competition on safety and reliability, not only on speed.

What this asks of leaders

AI demands more precise leadership. Not stricter control, but clearer boundaries. Explicitly name ethical conditions, normalize the conversation about risk and harm, and design structures in such a way that responsible behavior becomes the easiest option. Work simultaneously on language (what do we mean by “deciding”?), rhythm (when do we recalibrate models?), and role clarity (who carries which responsibility?). In this way, AI does not become the invisible manager, but a tool that enhances human dignity and public value.

Final paragraph

My core position is simple: AI must enhance public value and human dignity, and may never dilute responsibility. Speed without boundaries amounts to arbitrary governance; boundaries without pace are a missed societal opportunity. Leadership is therefore the precision craft of inviting and limiting at the same time, with awareness of the undercurrent of motives, fear, and performance pressure.