Be who you are.

Implementing AI without losing yourself (1/5)

This is not a manual but my workbook-in-public: five short essays about implementing AI without losing yourself. We start with intention and boundaries, move through roles, power and the undercurrent, and end with the question of how much of what is technically already possible we, as an organization, can actually carry. Along the way I look with you at both the surface layer and the undercurrent, and I close with a compact AI compass on a single A4 page.

1 – Why AI only works if you first take yourself seriously

(Taking yourself seriously is a good idea anyway.)

The real work starts before the technology.
This opening post sets out the playing field: not what AI can do, but what AI does to you, in choices, relationships, and responsibility.

AI is everywhere. Tools, dashboards, pilots, consultants. The temptation is to ask, above all: What can this do for us? A mature governance question is different: What does this do to us, to our way of leading, steering, and collaborating?

In this five-part series, I put three management questions on the table that you must answer before you scale up:

  1. Intention, values and boundaries – what is AI for, and what is it explicitly not for?
  2. AI as a socio-technical system – AI changes not only processes but also power, responsibility and the undercurrent.
  3. Governance, risks and learning architecture – how do you ensure AI does not grow faster than your ability to deal with it?

The temptation of the tool

Most organizations start with: Which AI tool can we quickly deploy to save time? That feels rational, but it is often an escape into technology. By starting there, you avoid three uncomfortable conversations:

  • What do we actually stand for as an organization?
  • What may never be determined by an algorithm?
  • What does it mean for my role as a leader if AI takes over part of the thinking work?

Three tensions AI magnifies

Whether you like it or not, AI sharpens existing tensions.

  • Efficiency versus the human scale – do you mainly want to do more with fewer people, or do you want to free up time for attention, judgment and relationship?
  • Data-driven steering versus professional autonomy – does the system take the lead, or does the judgment of professionals remain central? What do you say to someone who disagrees with the AI outcome?
  • Control versus trust – do you use AI to help or to control? What do people read into your choices?

The three focal points at a glance

  • Intention, values and boundaries – without explicit intention, AI quietly becomes the new carrier of culture; you end up with an organization that does what it can, not what it wants to stand for.
  • AI as a socio-technical system – AI redesigns roles, power and relationships. Who is allowed to overrule what? Who explains to the customer that “the system” was wrong?
  • Governance, risks and learning architecture – not a one-off project, but an ongoing dialogue between technology, ethics and practice.

What now, what later, what not

Now: name one place where AI already has a decisive influence and ask: what does this do to our way of working and talking with each other?
Later: design a quarterly rhythm in which the executive team and key players reflect on the impact of AI.
Not: start AI pilots that primarily increase control without first having the conversation about trust.

Reflection question

Are you mainly focused on tools and use cases, or are you also already talking about who you want to be in an AI-rich future?