Be who you are.

AI and the Human Scale

We show that AI is changing our work and raising ethical questions. We argue for the human scale and for transparent choices. We believe leadership is about what is desirable, not only about what is possible.

A reflection on the rise of artificial intelligence and the question of how we can keep the human scale central.

This blog explores how artificial intelligence influences our work and lives, and what is needed to develop and use technology in a way that safeguards humanity, ethics, and connection.

Artificial intelligence has, in a short time, penetrated almost all domains of our lives. From medical diagnoses to legal advice, from education to the creative industry: AI promises speed, efficiency, and new possibilities. But this promise comes with fundamental questions. Behind the scenes, complex algorithms analyze our behavior, predict it, and sometimes even attempt to influence it. These algorithms often operate on a scale and at a speed that exceed our human ability to fully grasp what is happening.

Who determines how AI makes decisions? How do we prevent algorithms from reinforcing existing prejudices? What happens to human skills when we delegate more and more to machines? And how do we preserve the human scale in systems that, because of their scale, computing power, and access to data, can act much faster and often far more convincingly than we can? These questions are not technical details, but touch the core of our social order and fundamental values such as equality, justice, and autonomy.

The challenge lies not only in technology, but especially in choices about values. Developers, policymakers, and users share responsibility for how AI is deployed. That requires transparency: can we see how decisions are made? It requires evaluation: are algorithms tested for fairness, bias, and impact? And it requires actively including different perspectives: who is at the table when the rules of the game are made? Too often, this is still a select group of technicians and investors, while the technology they design affects the lives of millions.

Human scale in AI means designing and using technology that supports human dignity, autonomy, and connectedness. That is only possible if we build ethical frameworks in from the beginning, instead of adding them afterwards as a corrective mechanism. It requires diversity in development teams, so that cultural biases and blind spots are recognized early. It also requires mechanisms that give citizens a voice in how technology shapes their lives, for example through citizen panels, ethical review committees, or open consultations.

Leadership in this domain means having the courage to look not only at what is possible, but especially at what is necessary and desirable. It means creating space for ethical reflection and public dialogue, even if that costs time or slows innovation. It also sometimes means saying “no” to applications that seem profitable or efficient in the short term, but are harmful to privacy, equality, or social cohesion. It requires the courage to set the human scale as a hard boundary condition, not as a side issue.

Examples show that it is possible. In healthcare, AI systems are successfully used to support doctors in making diagnoses, not to replace them. In this way, patients retain personal contact and the moral judgment of a human being, while doctors benefit from fast data analysis. In education, adaptive learning systems help teachers tailor learning materials to individual needs, without losing the human contact that is so important for motivation and development. In the legal system, there are pilot projects in which AI systems supply judges with relevant case law, while the judge always retains the final judgment.

Still, vigilance is required. The temptation to leave decisions entirely to algorithms grows as systems appear to perform better than humans in certain tasks. But efficiency is not the same as wisdom. Human judgment is rooted in context, empathy, and moral awareness—qualities that no algorithm can fully replicate. We must continue to ask ourselves: which decisions do we, as a society, want to keep in human hands, precisely because they are too important to leave to machines?

The future of AI is not a fixed path, but a route we shape together. If we are willing to approach technology with both curiosity and critical questions, we can ensure that innovation goes hand in hand with human dignity. Preserving the human scale is not a nostalgic longing for a time without technology, but a conscious choice to connect progress to the values that make us human. That requires leadership that stays the course amid technological promises and social uncertainty.

Rene de Baaij