“Many believe that artificial intelligence is ushering in the golden age of totalitarianism,” says Yuval Noah Harari in an interview with Handelsblatt. The famous Israeli historian, philosopher and global best-selling author has influenced business leaders and heads of state with his ideas. He is a historian, but he does not look only at the past: for him, dealing with history always means dealing with change.
Democracy, Harari says, rests on information technology, because democracy is at bottom a big conversation. Historically, large-scale democracies were impossible because no technology existed to organize a conversation among millions of people separated by thousands of kilometers. That only became possible in the modern era, with the advent of modern information technologies: newspapers, radio and television. These media formed the technological basis for democratic debate. Now a powerful new information technology is emerging, artificial intelligence, and it is shaking the foundations of the democratic order. That is the earthquake we are experiencing right now, at least as far as the crisis of democracy is concerned.
Protagonist technology, not a tool
For the first time, a technology stops being a tool and becomes a protagonist: artificial intelligence performs actions on its own. It is the first technology in human history that can make decisions and develop new ideas independently.
A nuclear bomb could not decide what to bomb, unlike new autonomous weapons systems.
A printing press could copy human thoughts, but it could not develop ideas of its own. Artificial intelligence can create new texts, images, videos, medicines, military strategies and financial instruments. The technology can manipulate us at any time.
The same holds in the financial sector, where AI-driven automated systems already move billions of dollars around the world every day. If a financial AI is developed and tasked with making money, it will do so.
“Imagine,” says Harari, “that we live in a world where artificial intelligence invents entirely new financial instruments, so complex that no human being can understand or regulate them. At first, people or companies make billions from them. But then there may be a collapse – and no one will be able to understand what is happening.” This, he argues, is a far more likely AI-apocalypse scenario than the “army of killer robots” that some Hollywood scripts predict.
The way out
Of course, some will say there is no problem, because we can simply pull the plug. But that decision probably will not be made so quickly when the technology is generating billions around the world.
“I’m not worried about a big computer trying to take over the world, but about the millions of AI bureaucrats and bankers who will make decisions in ever larger parts of our lives. Because we won’t understand these things well enough to actually regulate them.”
As in Kafka’s famous “Trial”, where the chief cashier of a bank is unexpectedly arrested by two unknown agents of an unspecified agency for an unspecified crime, which is revealed neither to him nor to the reader. He is not detained, but left “free” to await the court’s proceedings, until finally two men arrive at his apartment to execute him, “like a dog”.
“Democracies are very vulnerable”
Belgian philosopher Marc Coeckelbergh, professor of Philosophy of Media and Technology at the University of Vienna, writes in his book “The Political Philosophy of AI” that we can learn a great deal from Kafka’s “Trial”. There is, on the one hand, the feeling of being lost in a maze. Many crises are unfolding at once right now, and people feel oppressed by forces they cannot control.
Coeckelbergh identifies basic principles of democracy that are put at risk by the use of artificial intelligence.
Perhaps most important are the threats to freedom and equality. AI endangers people’s freedom when they are no longer able to form their own opinions but are manipulated, for instance through automatically generated posts on social media. Equality, another fundamental democratic principle, is also threatened: if artificial intelligence assesses, on the basis of large amounts of data and without human intervention, whether someone is creditworthy or not, there is a risk of perpetuating and reinforcing inequality – just as with the “wrong” zip code or the “wrong” nationality.
The algorithm error
The Belgian professor cites the case of a Black American who was arrested without explanation because a facial recognition algorithm had misidentified him.
Harari, in turn, argues that in the future, culture will increasingly be created and spread by non-human intelligence. Algorithms will develop not only financial tools but also scientific theories, music and films, and it is they that will disseminate our cultural goods on the large platforms.
In the future, different parts of humanity will be trapped in different information “cocoons.” Technology should not be taken as a given in a democratic society; people must have a say in the technological future of the planet.

As in “The Trial”, everything is fluid, indeterminate, vague – and at the same time threatening and dangerous. Nothing is fixed, concrete and clear. The judges are figures lost in indeterminacy and fog. The accused never sees them; he only hears about them, and what he hears is always frightening. He has to deal with mysterious figures – court ushers, defendants and collaborators of a dubious justice system – all with shadowy, ill-defined responsibilities. The chaos carries a constant threat, a constant danger, and the accused feels powerless against it.
If Kafka were alive today, he might write that artificial intelligence can be used in just this way by totalitarian regimes. After all, democracies are already very vulnerable at the moment…