{"id":2338,"date":"2026-02-11T11:53:00","date_gmt":"2026-02-11T11:53:00","guid":{"rendered":"https:\/\/dbvp.nl\/?p=2338"},"modified":"2026-05-11T11:54:39","modified_gmt":"2026-05-11T11:54:39","slug":"de-mens-in-het-algoritme","status":"publish","type":"post","link":"https:\/\/dbvp.nl\/en\/de-mens-in-het-algoritme\/","title":{"rendered":"The Human in the Algorithm"},"content":{"rendered":"<p>This blog explores how, as a society, we can ensure that algorithms support human values, rather than quietly shaping and determining our behavior.<\/p>\n\n\n\n<p>Algorithms are deeply woven into our daily lives\u2014from search results and news feeds to medical diagnoses and credit assessments. They offer convenience and efficiency, but also carry risks. Biases in data can lead to discriminatory outcomes, and the logic of algorithms is often opaque. It is tempting to see algorithms as neutral technology, but they are always a product of human choices, assumptions, and interests.<\/p>\n\n\n\n<p>Many decisions that were once made by people are now in the hands of systems we do not fully understand. A job vacancy may never reach you because an algorithm does not match your profile. A loan may be denied based on patterns that are not explainable to the applicant. And which news articles you read is largely determined by what the algorithm thinks will appeal to you\u2014and therefore also by what you do not get to see.<\/p>\n\n\n\n<p>The effect of this is subtle but profound. Our range of choices gradually shifts, often without us noticing. Our autonomy is influenced by logics we did not choose ourselves, and that may even run counter to our values.<\/p>\n\n\n\n<p>That is why we must continue to test technology against human values. This starts with making those values explicit: justice, transparency, inclusion, human dignity. 
They serve as a compass in the design, testing, and deployment of algorithms.<\/p>\n\n\n\n<p>A fair algorithm does not discriminate on the basis of irrelevant characteristics such as ethnicity, gender, or postal code. Transparency means that the system\u2019s functioning and decision-making are explainable to users. Inclusion means that diverse perspectives are involved in the design process, so that outcomes are not biased by a limited group of designers.<\/p>\n\n\n\n<p>Explainability\u2014often referred to as explainable AI\u2014is crucial for trust in algorithms. People must be able to understand why a decision was made, especially when that decision has major consequences for their lives. An algorithm that produces a medical diagnosis, for example, should not only provide a result, but also the underlying reasoning. Without such explanation, a technological black box emerges that makes meaningful discussion about values and justice impossible.<\/p>\n\n\n\n<p>A common misconception is that \u201cthe algorithm makes the decision,\u201d and that responsibility therefore lies with the technology. But behind every algorithm are people: developers, managers, policymakers. They determine which data is used, which rules apply, and which goals the system pursues.<\/p>\n\n\n\n<p>Responsibility therefore also means taking ownership of the consequences of algorithmic decisions. This calls for governance structures in which ethical review is as self-evident as technical quality control.<\/p>\n\n\n\n<p>Social media provide a clear example of how algorithms shape our reality. These systems are optimized for attention: the longer you keep watching, the better it is for the business model. This means algorithms often favor content that triggers emotion\u2014anger, outrage, sensation\u2014because it keeps you engaged. The result is that public debate hardens and polarization increases.<\/p>\n\n\n\n<p>Here lies a direct link to human values. 
If we want to preserve a healthy public sphere, we must critically examine how these algorithms are designed and which goals they serve. Do we want the attention economy to determine the course, or do we steer based on social and democratic values?<\/p>\n\n\n\n<p>Leadership in a data-driven world means having the courage to ask these questions\u2014not only from a technical perspective, but above all from a human one. It requires embedding ethics in policy, ensuring diversity in design teams, and continuous evaluation and adjustment. Technology is never finished; its social impact evolves and demands ongoing attention.<\/p>\n\n\n\n<p>Ultimately, the question is not whether we use algorithms, but how. Algorithms can add enormous value\u2014from accelerating the discovery of medical treatments to making logistics chains more sustainable. But only if we preserve the human scale will these systems continue to support us rather than steer us.<\/p>\n\n\n\n<p>Bringing the human into the algorithm means seeing technology as an extension of our values, not a replacement for them. It means accepting that technology is never neutral, and that it is our shared responsibility to use it for good.<\/p>\n\n\n\n<p>In a world increasingly driven by data, perhaps our greatest challenge is to remain human\u2014and to ensure that our technology does so as well.<\/p>\n\n\n\n<p><strong>Rene de Baaij<\/strong><\/p>","protected":false},"excerpt":{"rendered":"<p>This blog explores how, as a society, we can ensure that algorithms support human values, rather than quietly shaping and determining our behavior. Algorithms are deeply woven into our daily lives\u2014from search results and news feeds to medical diagnoses and credit assessments. They offer convenience and efficiency, but also carry risks. 
Biases in data [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[82],"tags":[],"class_list":["post-2338","post","type-post","status-publish","format-standard","hentry","category-nederlands"],"_links":{"self":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts\/2338","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/comments?post=2338"}],"version-history":[{"count":1,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts\/2338\/revisions"}],"predecessor-version":[{"id":2339,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/posts\/2338\/revisions\/2339"}],"wp:attachment":[{"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/media?parent=2338"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/categories?post=2338"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dbvp.nl\/en\/wp-json\/wp\/v2\/tags?post=2338"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}