Last Thursday afternoon, Google’s AI search assistant Gemini reported that up to ten billion euros in healthcare cuts are being made. It based the claim on sources including NOS, De Correspondent, RTL and De Telegraaf. But as of that afternoon, no such cuts had been announced.
There’s only speculation. Political correspondent Joost Vullings of news and current affairs programme EenVandaag explains how negotiations might go: if the coalition wants to cut five billion from healthcare, you simply start with ten billion and meet the opposition halfway in subsequent talks.
This blunder by Gemini illustrates the unreliability of AI. And yet, readers trust the quick summaries. They’re clicking through to the original source less often, research by cybersecurity firm Cloudflare shows. Around the same time, Pew Research also saw a halving of click-through rates.
‘Deeply concerned’
In November, virtually all Dutch media sent an urgent letter to the parties in coalition talks, expressing their concerns about the power and influence of AI companies. The media are at risk of being consumed by Gemini and all the other artificial summarisers, prompting media companies to ask the government for protection.
Although higher education media did not sign this letter, they have precisely the same concerns, says Willem Andrée, editor-in-chief of Resource at Wageningen University & Research. He’s seeing fewer visitors to the Resource site and partly attributes this to Google’s AI answers. “I’m deeply concerned about this.”
Other education publications are also noticing readers disappear. Marieke Verbiesen, editor-in-chief of Cursor at Eindhoven University of Technology, asked her hosting company to investigate the drop in traffic. They, too, thought it possible that AI summaries are deterring readers from visiting the original source.
Fact-finding
Media outlets are reluctant to disclose exactly how dramatic the situation is, but it’s clear they feel threatened. AI companies use media content as a source while at the same time keeping users away from those same media. “Together, this causes news organisations to lose reach and revenue on a large scale”, they write in the letter.
As a university magazine, Willem Andrée’s publication doesn’t rely so much on advertising, but he does want his journalistic productions to be read. “It’s a real shame that we’re losing readers to AI. Moreover, you want people to get their information from the original source, irrespective of whether it generates revenue. AI is undermining that principle.”
Paywall
Media companies are asking the Dutch government to translate European AI rules into strict domestic legislation to combat ‘illegal scraping’. Some AI companies, it seems, are even pulling articles from behind paywalls, or content that authors have explicitly said should not be included in AI training datasets.
The New York Times has noted this too and has taken legal action against a number of big AI companies. Chatbots sometimes quote verbatim from Times articles that sit behind subscriber paywalls; in doing so, the newspaper argues, AI firms risk eroding a key pillar of democracy.
That fear is also present amongst Dutch media, as the letter makes clear. Because of the influence of big tech companies, society is losing control over a “fact-finding oriented and pluralistic provision of information”. And: “The more journalism is curtailed, the less insight we have into what is actually occurring in our country and around the world.”
No solution yet
The Dutch House of Representatives asked outgoing minister Gouke Moes, whose portfolio includes education and media, for answers. In his response last week, Moes made clear that he doesn’t have a solution to this fundamental problem yet.
He places part of the responsibility with the public, who must use “media literacy” to shield themselves against the effects of “disinformation”, and refers to a 2024 report by the Netherlands Scientific Council for Government Policy (WRR) on the relationship between media and Big Tech.
But disinformation isn’t the problem that media highlight, nor is media literacy the solution, as confirmed by the WRR in its report. The WRR writes that media literacy will, with the rise of generative AI, “increasingly offer diminishing recourse”, because “real and fake are already hardly distinguishable”.
Protection
Media can take some defensive steps. Software exists that keeps AI crawlers out. Cloudflare offers this free to journalistic media and there seems to be interest in the Netherlands. But Cloudflare is a commercial company, and with tech companies it’s always unclear how long free remains free.
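The simplest of these defensive steps is a robots.txt file on the publication’s own site that asks known AI crawlers to stay away. A minimal sketch, using the crawler names these companies publish (GPTBot for OpenAI, ClaudeBot for Anthropic, Google-Extended for Google’s AI training, CCBot for Common Crawl):

```
# robots.txt — ask AI training crawlers not to index this site
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Regular search crawlers remain welcome
User-agent: *
Allow: /
```

Note that robots.txt is purely advisory: a crawler that ignores it must be blocked at the server level, which is what commercial services such as Cloudflare’s do.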
Another option is to set up robust paywalls or to avoid Google entirely, says Willem Andrée. But can new readers still find you then?
Andrée thinks regulation will ultimately be necessary. “In my view, that European legislation should be introduced here quickly. Things are changing so fast. The time for standing idly by has passed.”