Limits to Technological Autonomy
**Summary**
This blog explores how AI regulation can help safeguard human agency at a time when technology is becoming increasingly autonomous. How can we, as leaders, strike the balance between innovation and ethics?
The debate on AI regulation has accelerated worldwide. The paradox is stark: technology, once praised for its potential to advance humanity, is now also seen as a source of concern. Where autonomy was once technology’s strength, it now appears to pose a threat to human agency. What does it mean when technology gains more control over our lives than we ourselves can exercise? And how can leadership guard these boundaries without stifling progress?
The discussion about AI regulation is not a purely technical matter; it revolves around values, power, and responsibility. It is a search for the balance between innovation and ethics, between freedom and control. What does leadership require when the technology we create begins to challenge the very foundations of our society?
The System of Regulation: Restoring Human Agency
From a systemic perspective, establishing rules for technology is an attempt to restore human agency in a world increasingly dominated by algorithms and autonomous systems. Autonomy can be powerful—consider medical AI that can make diagnoses faster than a human—but once systems make decisions without human review, power subtly shifts from people to machines.
The speed at which technology develops makes it difficult for legislation and regulation to keep pace. Many countries struggle with the question: should we first fully understand what a technology does, or should we put frameworks in place now to prevent it from getting out of control? This tension is intensified by economic competition: those who regulate too slowly risk falling behind.
In this light, regulation is not a barrier to innovation, but a framework that points the way forward. It functions as a safety net that prevents technology from escaping human control. When systems threaten to become detached from their designers, setting boundaries becomes inevitable.
For leaders, this means not only complying with existing laws, but also actively contributing to shaping policy. Not reacting to abuses after the fact, but thinking through scenarios in advance, mapping risks, and embedding ethical principles in design and implementation.
Psychodynamic Perspective: Control as a Defense Against Complexity
From a psychodynamic perspective, the need for control over technology is often driven by fear of the unknown. As technology becomes more complex, the feeling that we are losing grip increases. That fear is not irrational: algorithms sometimes learn in ways that even their creators do not fully understand. The human reflex, then, is to fall back on rules and oversight.
This reflex can work in two directions. On the one hand, fear can protect us—by forcing us to think about boundaries before irreversible damage occurs. On the other hand, the same fear can lead to rigidity: overregulation that stifles innovation, or a culture in which every experiment is seen as a risk rather than an opportunity.
Leadership in this tension calls for self-reflection: which fears and assumptions do we ourselves bring into the conversation about AI? And how do these influence our willingness to allow space for innovation?
Legal and Societal Dimensions
Legally, we are lagging behind reality. AI does not stop at national borders, but laws do. As a result, companies often shop between jurisdictions in search of the most lenient rules, which creates an uneven playing field and undermines the possibility of enforcing collective standards.
Societally, there is a second issue: the degree of trust citizens have in the institutions that regulate technology. If regulation is perceived as an instrument of power rather than protection, it can lead to resistance—think of protests against digital surveillance or facial recognition. Without broad support, rules are more likely to be seen as obstacles than as safeguards.
A robust approach therefore requires transparency and inclusion. Legislation should not be created solely by lawyers and technologists, but in dialogue with society. This means involving citizens, ethicists, artists, scientists, and entrepreneurs in the creation of frameworks.
Leadership in Action: A Mini-Case
Imagine this: you are the CEO of an AI company that develops systems capable of making autonomous decisions. You know the possibilities are enormous—from logistical optimization to personalized medical care. But you also realize that your technology can be misused, for example for political influence or discriminatory selection.
The market is pushing for speed: those who innovate now win the competitive race. At the same time, calls for regulation are growing louder. As a leader, you stand at a moral crossroads: do you maximize growth, or do you deliberately build in safety mechanisms that slow development but strengthen public trust?
Responsible leadership here means choosing the latter path. It means establishing internal ethics committees, conducting audits on bias and transparency, and making clear agreements about how and by whom your systems may be used. This may cost you market share in the short term, but it strengthens your legitimacy and sustainability in the long term.
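To make the audit step concrete, here is a minimal sketch of a demographic-parity check, one of the simplest fairness metrics used in bias audits. The group labels, sample decisions, and the 0.2 review threshold are hypothetical illustrations, not a prescribed standard; a real audit would use an organisation's own data and legally appropriate thresholds.

```python
# Minimal sketch of a demographic-parity check, one of the simplest
# bias-audit metrics. Group labels, sample data, and the threshold
# below are hypothetical illustrations.
from collections import defaultdict

def selection_rates(decisions):
    """Positive-decision rate per group.

    decisions: iterable of (group_label, approved: bool) pairs.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Hypothetical audit sample: (group, did the system approve?)
    sample = [
        ("A", True), ("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False), ("B", False),
    ]
    print("Selection rates:", selection_rates(sample))
    gap = demographic_parity_gap(sample)
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.2:  # illustrative threshold; the right value is context-dependent
        print("Gap exceeds threshold: escalate to human review.")
```

Even a check this simple makes the leadership point tangible: once the gap is a number, someone must decide which value is acceptable, and that decision is ethical, not technical.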
A Systemic Outlook: Working Together on Boundaries
Setting limits on technology is not the task of a single party. Governments, companies, civil society organizations, and citizens must work together. This requires a shared narrative: why are we setting these boundaries, for whom are we doing it, and which values do we want to protect?
Working systemically also means looking to the long term: what effects will our regulation have in five, ten, or twenty years? A boundary that is too rigid can stifle future innovation; a boundary that is too loose can cause damage that cannot be undone. Finding that balance is a continuous process, not a one-time intervention.
What Are You Willing to Face?
What does this time ask of you—not only as an executive, but as a human being? How do you ensure that technology remains in service of society, rather than the other way around? Which boundaries are you willing to set, even if that means giving up short-term profit for long-term value?
Leadership in a time of technological acceleration requires more than vision. It requires an ethical compass that holds course amid uncertainty. Technology is not an end in itself, but a means. The question is: do we dare to continue seeing it that way?
*Rene de Baaij*

