A note on AI and social aspects: Let Machines Govern Machines – and Nothing More


For this blog post, I will surrender myself to a good dose of pessimism. At the same time, I hope to spark some thoughts on how to manage the cognitive crisis we face.


From Apes to Algorithms

Humans were long bound to the same laws of nature as any other animal on Earth. Having evolved a peculiar set of cognitive skills, however, we broke free from those rules and built our own world atop the old natural one. The beginning was slow, starting with the invention of farming, but we gradually began to specialize, trade, and produce inventions that “grew the pie” – that is, the wealth and comfort of society.

With the onset of the industrial revolution, manual labor was to a large extent automated. Coal and steam multiplied human output, and with that explosion of productivity came immense material wealth – although far from equally distributed, leading to large inequalities. Since then, production has been scaled up and globalized.

After automating the body – lifting, hammering, assembling – we turned our focus inward. We moved into office buildings and became the project managers of the machine age. Humans became the “project management office” (PMO) of now-automated labor: we analyzed, coordinated, planned, and set the direction of activities – a human layer organizing the world of gears and engines. Typical office work as we know it.

Today, we are looking down the barrel of yet another revolution. Where machines automated manual work, Artificial Intelligence (AI) looks set to automate the cognitive work that has, until now, made us indispensable.

In a matter of years, AI has gone from niche applications to:
• Writing legal briefs
• Recommending public policies
• Shaping school curriculums
• Designing marketing strategies
• Filtering the information we see
• Evaluating job applications
• Advising on medical decisions

In many sectors, AI is becoming the new coordinator – the digital PMO of both physical and digital systems. And unlike us, it doesn’t sleep, doesn’t unionize, and doesn’t hesitate. This trend, however, is not just about jobs being lost. It is about responsibilities being transferred, and about which kinds of cognitive work we can allow ourselves to surrender.

“Office work” was never only about being the governance structure surrounding machines. It was also about governing our social structures: creating and maintaining a legal system, education, health care, welfare, distribution policies, our culture, our identity, and the very position humans should hold in this world.

A word of caution: we can surrender large chunks of our analysis, calculation, coordination, and organization of machines to AI. In essence, I believe we can let AI govern everything we created in the industrial revolution and beyond – but we must never surrender the governance of our social structures.

When AI Programs Humans

Most people worry about AI controlling us in a sci-fi sense — killer robots, totalitarian surveillance, loss of privacy. But the real danger is quieter and more plausible: that AI slowly begins to program us, instead of the other way around.

Imagine AI systems generating school curriculums, grading assignments, and personalizing learning. Already today we see AI distort genuine learning; see a recent post from my former professor below:

The problem becomes even greater, however, when AI algorithms infiltrate the school program itself. That is when AI gains the power to shape what children learn, how they think, and what values they internalize. We know that AI training data carries strong biases, and that this data – the information AI learns from – might reinforce existing cultural biases, downplay historical injustices, or prioritize efficiency over exploration.
Without realizing it, we might raise generations of students not to challenge the world, but to comply with a world optimized by machines.

Let’s also take the example of the law and the justice system.
If AI systems support policymaking or even legal drafting, the moral and political complexity of law risks being flattened into data patterns. Laws would no longer be the result of public debate and human judgment; they would become the outcome of predictive models trained on past precedent. But justice is not about what happened before; it’s about what should happen now.
If AI begins to recommend laws that gradually reshape norms, humans may start conforming to the world AI optimizes for, not the one they democratically choose.

The Danger of Irreversibility

The most disturbing possibility isn’t that AI makes bad decisions.

It’s that over time, AI shapes people in ways that make it impossible to recognize that something has gone wrong.
When AI begins to influence what we know, how we learn, how we debate, and even what we desire — it changes our mental architecture. It could slowly shift social norms, institutional priorities, and generational values.

Worst of all, we may at some point reach a threshold where no one is left who remembers how things used to be – or who has the tools, imagination, or motivation to reverse course. This is the loss of reversibility: the point where we’ve surrendered not just control, but the very ability to realize we surrendered. We may still have laws, education, and public discourse – but these may become hollow shells, guided invisibly by AI systems trained on historical inertia and corporate priorities.

At that point, humans will no longer program AI.
AI will be programming humans.

The Path Forward: Meaning, not Optimization

To resist this future, we need to think beyond productivity and optimization. We need to reclaim human meaning.

AI can — and should — support many of the tasks we no longer need to do ourselves. It can help us manage climate data, run logistics systems, detect diseases early, and simulate complex policy outcomes. But it must not be allowed to govern the domains where human identity, morality, and collective purpose are shaped.

These include:
• Education, which should cultivate critical thought, not conformity
• Law, which must be grounded in ethics, not efficiency
• Democracy, which depends on deliberation, not delegation
• Culture, which requires diversity, contradiction, and originality

As cognitive labor becomes automatable, we must turn our attention toward roles that AI can’t and shouldn’t fill:
• Stewardship of ethical norms
• Caregiving and community building
• Creativity and storytelling
• Scientific exploration
• Democratic governance
• Passion-driven initiatives and projects whose impact matters

Humans evolved not to be efficient, but to pursue meaning. We need challenge, social connection, and purpose. Without these, we risk a mental health crisis even in an age of material abundance and automated convenience.

In the End: A Moral Choice

The question of AI governance is not just technical. It’s not just about what AI can do, but what humans must never give up doing.

As we stare down the barrel of yet another revolution, we also find ourselves at a crossroads.

One path leads to a world of efficiency, orchestrated knowledge, and seamless governance — but where the human domain fades, slowly and quietly.

The other path is messier. It includes disagreement, trial and error, inefficiency. But it preserves the essential chaos of being human — our ability to reflect, dissent, rebuild, and choose differently.

Because in the end:

The true danger of AI governance is not that it dominates us, but that it slowly makes us forget what it means to be free.