Silicon Valley speaks of artificial “superintelligence,” fueling the idea of machines’ superiority over humans. This semantic shift, which has no scientific foundation, could foster systemic discrimination against humanity, warn Vincent Lorphelin and Laurence Devillers.
Before the Second World War, the study of “human races” was treated as a science like any other. Only in 1950 did UNESCO set about dismantling it. After observing that interpretations of differences between “races” had been “to a very large extent influenced by our prejudices,” its text made a single recommendation, simple, even simplistic: “the serious errors caused by the use of the word ‘race’ in everyday language make it desirable that we abandon this term altogether when applying it to the human species and adopt instead the expression ‘ethnic groups’.” At the time, who could have imagined that changing a word would match the magnitude of the challenge? Yet this lexical break was the first step toward dismantling scientific racism.
Today, it is the science of artificial intelligence that must be questioned. According to Sam Altman, the creator of ChatGPT: “ChatGPT is already more powerful than any human who has ever lived […] We do not know how far we can go beyond human intelligence.” Eric Schmidt, former CEO of Google, claims that AI will be “as intelligent as the most intelligent mathematician, physicist, artist, writer, thinker or politician […] computers will be more intelligent than the sum of humans.” Elon Musk calculates that “human intelligence will be less than 1% of all intelligence.” Mark Zuckerberg concludes that “the development of superintelligence is now in sight.” Over the summer, a consensus took hold that AI is on the verge of surpassing human beings in all cognitive tasks, the product of technological enthusiasm bordering on fantasy, fueled by well-rehearsed marketing storytelling.
A moral error arises when science becomes contaminated by prejudice. Yesterday, it was supposedly natural racial hierarchies. Today, it is the idea that human intelligence can be entirely reproduced by machines. From chess to Jeopardy!, from the game of Go to art competitions, nothing seems able to stop AI. Even ethics itself is now the subject of algorithms designed to “align” AI with human values. These advances feed the illusion of superiority that will eventually justify mass unemployment, an unprecedented concentration of wealth for the benefit of Big Tech, and dependence on algorithms.
A mere accusation of bad faith? No: Brad Smith, the president of Microsoft, already wants to grant rights to machines: “We all have the right, under copyright law, to read and learn,” he says. “We now ask whether we can allow machines to learn in the same way. I think there is a societal imperative to make this possible.” An American judge adds: “The authors’ complaint is no different from that of people who would object that teaching students to write well would lead to an explosion of works competing with their own.” Donald Trump gives them his political blessing: “You can’t expect an AI program to perform well if every article, book, or anything else you’ve read or studied is supposed to cost you money.” Together they merely rationalize the prejudice that the two intelligences are equivalent, in order to justify the massive plundering of one by the other.
UNESCO was right in 1950: everything begins with words. The phrase “artificial intelligence” is a convenient anthropomorphism that helped win over the general public. But that success is a trap. The word “intelligence” is now used indiscriminately for both humans and machines, which fuels the prejudice.
A clear boundary must be restored. Our intelligence distinguishes the novel from experience, spectacle from real life, the map from the territory, science from reality, statistics from meaning. A machine, however powerful it may be, has no understanding of the world beyond its computational representations. Our intelligence possesses a bodily pre-science: feelings of gratitude, jubilation, or trust during a “cognitive resonance”—that is, when an event confirms our representations; conversely, feelings of anger, shame, or dismay alert us to cognitive dissonance. The machine, lacking a body, senses nothing. Our intelligence is equipped with a thousand sensors that allow a diplomat to hear a threat behind the unspoken part of an innocuous sentence; a journalist with a critical eye to sense that a detail is revealing; a doctor to be alerted by secondary signs beyond the health report; and the authors of this column to have first sensed that something was wrong in the discourse about AI. A machine operates only on explicit data. The irreducible part of our intelligence allows us, in daily life, to grasp a reality far greater than the shadow theatre in which AI is permanently confined.
The summer of 2025 marks a turning point: Silicon Valley has crossed the Rubicon by elevating “superintelligence,” a word that concentrates confusion and hierarchy, to the status of indisputable fact (even as it denies doing so), although no falsifiable scientific test for it exists. Like the word “race” yesterday, the word “intelligence” has just shifted from tool of knowledge to instrument of techno-political power. This shift is not scientific progress; it is a moral error.
We therefore propose a simple ethical principle: any leader who uses the word “intelligence” to compare machine and human must explicitly acknowledge the irreducible part of human intelligence and its superior value for the common good. Or else change their vocabulary. Failing to do so amounts to endorsing a new pseudo-science that, like scientific racism yesterday, will tomorrow feed systemic discrimination against humanity. One does not trifle with science, gentlemen gurus!