LE SOIR

AI invites a break with the Gafam business model

As a counterpoint to the theses of Silicon Valley, economist Vincent Lorphelin proposes the notion of "liquid property", in which the working capital of the economy is no longer money but the circulation of intellectual property rights. A way out of the tug-of-war between the "pro-culture" and "pro-innovation" camps, one that would also serve the European social economy model.

On the one hand, sacrosanct copyright, whose protected content massively feeds generative artificial intelligences. On the other, the text and data mining right, i.e. the right given to publishers of AI tools to draw on third-party data to train their models.

In between lies a gray zone, where legal warfare is raging, as in the New York Times' lawsuit against OpenAI, the publisher of ChatGPT. And political warfare too, in an attempt to reconcile intellectual property and innovation.

What's the way out? Economist Vincent Lorphelin, co-chairman of the Institut de l'Iconomie and founder of Controv3rse, a think-tank and study group on the digital economy, has co-piloted a thought-provoking and disruptive report on the issue. The report was commissioned by the French National Assembly, but above all aims to provide a "counterpoint" to Silicon Valley's dominant theses. And this "at a time when very important decisions are being taken in Europe, and when AI is a guest at the Davos Forum".

Vincent Lorphelin questions the value of human labor, intellectual property rights, the European social economy model (versus the neoliberal market economy model) and the creative industry value chain. "We used to think that human creativity was an insurmountable bastion for the machine, but now, in view of what generative AI enables, we're beginning to doubt it. The way out may not be legal, but economic", says the researcher.

Philippe LALOUX : Copyright served as the basis for the thinking set out in your report...

Vincent LORPHELIN : If we look at generative AI from the angle of law and authors' rights, we need to answer three questions. One: do generative AIs train on copyrighted works? The answer is yes, we know it, and it is massive. Two: is it legal? We don't know yet; there are going to be ten to fifteen years of American-style litigation, and in the end the issue will not be legal but political. Which raises a third question: what is in the public interest? And here, the debate has not yet begun.

Doesn't the AI Act provide the beginnings of an answer?

The AI Act includes a specific clause on the transparency of generative AI sources. This clause confirms the answer to the first question: yes, generative AIs draw massively on content protected by intellectual property. But not the second.

In fact, it is not even clear that source transparency is absolutely necessary. The New York Times, for example, managed to trap ChatGPT into proving that OpenAI had used its content. For their part, generative AI publishers insist that full transparency about the list of content scraped from the web would force them to hand over their trade secrets. And 95% of the value of their products lies in this know-how.

In short, for companies innovating with AI it's a question of survival, and for the authors, who say generative AI is stealing their jobs, it's an existential problem. On both sides, the harm is almost certain. As a result, the matter won't be settled until we've answered the third question, that of the general interest. At this stage, the AI Act fails to do so.

How do we answer this question?

Sam Altman and Elon Musk have their answer. What do they say? That generative AI will make work obsolete, so we need to prepare for universal income. This coincides with the end-of-work thesis relayed by Jeremy Rifkin a few years ago. It also ties in with the Singularity thesis, which argues that within a few years (we can quarrel over the date), general AI will overtake humans in every field. The proof: it was always said that AI would never enter the creative professions (it would never be an author, a draughtsman...), yet since last year, it has. The gates of the bastion have been breached. In parallel, from an economic point of view, the platform economy of the Gafam has become the dominant form of the market economy. Except that it has a major bias: it concentrates wealth to the benefit of the platform operator, to the detriment of contributors.

The economic value of work is no longer protected by know-how, nor by its status as a value that is not fungible in the economy.

So, is this effectively the end of human work?

Well, if you think of work as a form of servitude, that's rather good news. In that case, Sam Altman's idea of a universal income is more in the general interest. It makes sense.

But...

But yes, of course, there is a counter-model: generative AI can be a tool in the service of creativity, not necessarily a competitor to human intelligence. One could also argue that platforms don't necessarily lead to the concentration of wealth. Take collective management organizations like Sacem (or Sabam and SACD in Belgium, editor's note). They can be seen as platforms, bringing together musicians on one side and music promoters on the other. Their fundamental difference from digital platforms is that they manage rights, which are not exactly commodities. The proof: in their contracts, rights are valued not in a monetary unit such as euros, but in percentages. The artist is remunerated in proportion to the amount collected from the exploitation of those rights.

Because rights are not commodities, managing them calls for a special form of governance: collective management organizations are governed by the authors themselves, and their mission is to manage something other than merchandise on the authors' behalf. This ties in with the thesis of the decentralization of digital platforms.

Because, you say, the centralized models of Gafam lead to "subculture"...

The centralized model leads to subculture because, in this scenario, generative AIs can no longer feed on copyrighted works and so feed on themselves. We call this model collapse: past a certain point, culture and creativity feed on themselves, and there is no way out. [See below: Generative AI and copyright: "artificial culture must not follow the sad path of junk food".]

And how can this decentralized platform counter-model answer the question of the general interest of AI?

Firstly, this counter-model is not marginal: these collective management organizations are worth twelve billion dollars worldwide. Secondly, current events give us an example: Getty Images, one of the world's largest image banks (which has filed a complaint against Stability AI for plundering its content, editor's note) is launching its own generative AI to compete with the leaders. Its particularity is to guarantee the absence of any copyright infringement, while remunerating the authors whose works are used to train its tools.

This invites us to re-evaluate the value chain. Getty, which basically bought and sold photos, will now sell images produced by its AI, to which millions of images have contributed. Thanks to AI, we can see this shift from a centralized to a decentralized model. Getty remunerates the authors of the primary images, with an algorithm that calculates a pro rata share of the contribution.
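The pro-rata mechanism described here can be illustrated with a minimal sketch. The weighting scheme, names, and figures below are hypothetical illustrations, not Getty's actual algorithm:

```python
# Hypothetical illustration of pro-rata remuneration: a revenue pool is
# split among authors in proportion to how much their works are deemed
# to have contributed to the AI-generated output.

def split_royalties(pool: float, contributions: dict) -> dict:
    """Distribute `pool` pro rata to each author's contribution weight."""
    total = sum(contributions.values())
    if total == 0:
        raise ValueError("no contributions to weight")
    return {author: pool * weight / total
            for author, weight in contributions.items()}

# Example: 100 euros of revenue, three contributing authors with
# (hypothetical) contribution weights 5, 3 and 2.
payout = split_royalties(100.0, {"alice": 5, "bob": 3, "carol": 2})
print(payout)  # {'alice': 50.0, 'bob': 30.0, 'carol': 20.0}
```

The hard part in practice is of course not the division itself but estimating the contribution weights for millions of source works; here they are simply given as inputs.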

Does Getty Images herald a real break with the past?

This movement is set to accelerate, because copyright represents a real uncertainty for operators of generative AI, which deters investors. Even OpenAI has entered into this logic: while it defends itself on the legal front, on the business side it goes out to license copyrighted content itself. And for big tech, it's a question of image (just think of Apple, which has made privacy protection a major marketing argument).

So there are two forces at work pushing the shift: on the one hand, uncertainty about rights; on the other, image concerns. Remember the Napster wave and the start of mass music piracy? It looked like the end of musical creation, because there was no longer any economic model. Then came iTunes and Spotify, and a new wave overtook the first. The wager of this scenario is that AI will go through the same thing: the new wave will invent a decentralized model that values, rather than devalues, creativity and work.

You speak of "liquid property"...

Imagine that this decentralized platform model became systemic, just as centralized platforms like the Gafam are today. The material manipulated would then be intellectual rights, not commodities: non-market economic value. The economist Karl Polanyi defended the idea that the market economy commodified work, whereas work cannot be equated with a commodity.

Centralized platforms, for their part, will do their utmost to give intellectual rights a market value. In a decentralized logic, rights are expressed in percentages, not in money: the economic values manipulated are no longer expressed in euros or dollars. It's an economy based not on money, but on property. To draw a parallel with what Emile Zola called "liquid money" (marking the transition from an economy where wealth came from land ownership to one where it came from the circulation of money), we propose the notion of "liquid property", where the working capital of the economy is the circulation of intellectual property rights.

Today, everything is so financialized that a company presents its accounts as if it were for sale every day, because it measures everything in money. But there are intangible, off-balance-sheet assets that express values that cannot be stated in money: a company's wealth goes beyond what money can express. As soon as we start handling liquid properties, we have to find other indicators. Gradually, we move towards a society that monetizes a different kind of wealth. Granted, these are the visions of an economist thinking in the long term, but we can see that AI is, counter-intuitively, shifting economic models towards this vision.


So, strategically, Europe should choose this path, this counter-model to the one defended by Sam Altman?

Absolutely. Our DNA is the social market economy, the economy that makes society; work makes sense. Historically and strategically, collective rights management is twice as developed in Europe as in the United States. So, if we return to the fundamental question of the general interest, to counterbalance the singularity thesis, the European general interest is to promote the decentralized model, and therefore to limit the text and data mining right, which is the subject of a major legal question mark. So we need to take a political stance: for example, restrict text and data mining to research and prohibit it as soon as commercial use is involved.

We need total transparency on sources so that authors can be remunerated in proportion to their contribution to the generated content, even if there are millions of them. This is what Getty does. So we have the technical means to provide transparency and remunerate authors, via a trusted third party, without disrupting innovation.

Where the AI Act drew us into trench warfare between the "pro-culture" and "pro-innovation" camps, we can get out of it by creating a link that brings the two sides to collaborate.

So regulation isn't a panacea?

We've reached a fundamental philosophical moment that really raises the question of the kind of society we want. To caricature: the United States invents, China manufactures, and Europe regulates. We're moving towards a vision in which the counterweight to the unbridled neoliberal market-economy model lies elsewhere than in regulation. It's not legal; it's economic.