**Reflection on the Rise of Artificial Intelligence and the Question of How to Keep the Human Scale Central**
This blog explores how artificial intelligence affects our work and lives, and what is required to develop and use technology in a way that safeguards humanity, ethics, and connection.
Artificial intelligence has, in a short period of time, penetrated almost all domains of our lives. From medical diagnoses to legal advice, from education to the creative industries: AI promises speed, efficiency, and new possibilities. But this promise is accompanied by fundamental questions. Behind the scenes, complex algorithms analyze, predict, and sometimes even attempt to influence our behavior. These algorithms often operate on a scale and at a speed that exceed our human capacity to fully comprehend them.
Who determines how AI makes decisions? How do we prevent algorithms from reinforcing existing biases? What happens to human skills when we increasingly delegate tasks to machines? And how do we preserve the human scale in systems that, due to their scale, computing power, and access to data, can act much faster and often more convincingly than we can? These questions are not technical details, but touch the core of our social order and fundamental values such as equality, justice, and autonomy.
The challenge lies not only in technology, but above all in choices about values. Developers, policymakers, and users share responsibility for how AI is deployed. This calls for transparency: can we see how decisions are made? For evaluation: are algorithms tested for fairness, bias, and impact? And for the active inclusion of different perspectives: who is at the table when the rules of the game are defined? Too often, this is still a select group of technologists and investors, while the technology they design affects the lives of millions.
Keeping the human scale in AI means designing and using technology that supports human dignity, autonomy, and connection. This is only possible if ethical frameworks are built in from the outset, rather than added afterward as corrective mechanisms. It requires diversity in development teams, so that cultural biases and blind spots are recognized early. It also calls for mechanisms that give citizens a voice in how technology shapes their lives, for example through citizen panels, ethical review committees, or open consultations.
Leadership in this domain means having the courage to look not only at what is possible, but above all at what is necessary and desirable. It means making space for ethical reflection and societal dialogue, even if that takes time or slows innovation. It also sometimes means saying “no” to applications that may seem profitable or efficient in the short term, but are harmful to privacy, equality, or social cohesion. It requires the courage to make the human scale a firm boundary condition, not a side issue.
Examples show that this is possible. In healthcare, AI systems are successfully used to support doctors in making diagnoses, not to replace them. This preserves personal contact and human moral judgment for patients, while allowing doctors to benefit from rapid data analysis. In education, adaptive learning systems help teachers tailor material to individual learning needs, without losing the human contact that is so important for motivation and development. In the justice system, there are pilot projects in which AI-based recommendation systems surface relevant case law for judges, while the judge always retains the final decision.
Nevertheless, vigilance is required. The temptation to fully delegate decisions to algorithms grows as systems appear to perform better than humans at certain tasks. But efficiency is not the same as wisdom. Human judgment is rooted in context, empathy, and moral awareness—qualities that no algorithm can fully replicate. We must continue to ask ourselves: which decisions do we, as a society, want to keep in human hands precisely because they are too important to entrust to machines?
The future of AI is not a fixed path, but a route we shape together. If we are willing to approach technology with both curiosity and critical questioning, we can ensure that innovation goes hand in hand with human dignity. Preserving the human scale is not a nostalgic longing for a time without technology, but a conscious choice to link progress to the values that make us human. This calls for leadership that holds course amid technological promises and societal uncertainty.
René de Baaij

