René de Baaij

From System to Meaning

AI and Humanity

**Summary**

This blog explores how AI in HR processes can increase efficiency, but may also reduce human closeness. How do we preserve humanity when systems take over more and more of our work?

We call it efficiency—but do we actually mean distance? More and more organizations are integrating AI into their HR processes, from recruitment and evaluation to exit interviews. It sounds logical, modern, forward-looking. Yet something about it is unsettling. What does it mean to entrust human care and judgment to a system without a body, without a voice, without shame? What exactly are we handing over? And what do we prefer to keep out of sight?

AI is often presented as neutral and objective, but in reality it is a mirror of the values, assumptions, and blind spots of the people and organizations that design and use it. In that mirror we see not only data, but also our own preferences, biases, and insecurities reflected—often without realizing it.

The Psychodynamics Behind Technology

The psychodynamics of technology are rarely discussed in boardrooms. AI is not merely a tool; it is a new carrier of our unconscious dynamics. In Tavistock terms: the organization projects its unresolved tensions onto its structures, and technology becomes a carrier of those projections.

Carl Jung would say that the unconscious always seeks a new form. What we as humans prefer to avoid—uncertainty, bias, ambivalence—finds its way into the algorithm. Not because AI is malevolent, but because we deposit our own discomforts into it.

The paradox is clear: the more we automate in the name of objectivity, the more we conceal what actually wants to be heard. The desire to be fair and efficient strips processes of their humanity—and with it the container in which genuine encounter can take place.

A Mini-Case: Where Data and Experience Meet

An international tech organization introduced an AI-driven system for performance evaluations. Managers provided scores; AI formulated the feedback. “Efficient, consistent, fair,” read the internal message. But in the corridors, distance grew. Employees no longer felt seen. Feedback became something “the system” said, not something a human being told them.

One manager chose a different approach. She did not use the AI output as an endpoint, but as a mirror. She asked herself: what do I see—or not see—in this text? Where does my own sense diverge from the outcome? Instead of adopting the judgment, she entered into conversation with the employee about the difference between data and experience.

There, in that difference, something new emerged: an encounter. The employee felt seen after all. Not despite the technology, but because the manager did not outsource her judgment and instead took responsibility for it.

This shows that leadership is not about flawlessly applying systems, but about being present with what does not fit into the system—and being willing to put words to it.

Systemic Effects of AI in Organizations

From a systemic perspective, AI changes not only processes, but also relationships. When decisions are increasingly made by systems, the locus of responsibility shifts. Who still feels ownership of a decision when it “comes from the system”? And what does that do to trust—both among employees and among leaders?

When AI makes decisions that affect people, there is a risk that human relationships become more functional and more distant. Not because people want this, but because the system shapes the form of interaction. This can lead to:

- Loss of relational depth: conversations become more transactional and less personal.

- Displacement of responsibility: decisions are attributed to the technology, blurring ownership.

- Reinforcement of existing inequalities: biases in data are quietly reproduced.

The system then operates not only within the organization but on it, changing the way people relate to one another.

Psychological Mechanisms: Distance and Reassurance

From a psychological perspective, AI offers leaders and HR professionals a form of reassurance. The system creates the illusion of objectivity and makes difficult conversations feel “safer” by reducing the personal charge. But it is precisely that charge that can be essential for meaningful interaction.

Delegating difficult decisions to technology can also function as a defense mechanism. If the message does not land well, responsibility can be shifted: “That’s just what the system says.” This protects the sender, but can leave the receiver feeling that genuine connection or dialogue is impossible.

Leadership and the Choice for Presence

The question, then, is not only what AI can do, but what we want to do with it. AI can be a powerful tool for efficiency, but leadership demands more than efficiency. It demands presence, courage, and the willingness not to shy away from discomfort.

Presence means:

- Engaging in conversation yourself, even when it is difficult.

- Seeing technology as a support, not as a replacement for judgment or empathy.

- Making explicit which values guide decisions.

In a time when systems are taking over more and more, choosing closeness can be an act of leadership.

A Reflective Invitation

Which parts of yourself do you entrust to technology?

And what happens to your leadership when you outsource your judgment, your timing, your intuition?

Perhaps this time does not call for better algorithms, but for greater presence—especially in the places where things become uncomfortable. AI can do a great deal, but it cannot bring the warmth, nuance, and courage that make human contact meaningful. That remains our domain—and perhaps that is our greatest responsibility.

*René de Baaij*