Big Tech Is Breaking Democracy. Treating Platforms as Publishers Is Key to Stopping It
By Neil Brady
17th January 2025
Is Microsoft’s use of generative artificial intelligence to publish an image of a prominent Irish broadcaster beside a statement that a ‘prominent Irish broadcaster faces trial over alleged sexual misconduct’ something it should be liable for publishing, as a newspaper would be? Or is Apple’s use of it to report a suicide and falsely attribute that report to the BBC something it ought to be liable for? These questions can only be answered by answering another question first: are Microsoft and Apple ‘publishing’ when they do this?
As The Guardian’s Head of Innovation, Chris Moran, mused recently, “this fundamental question that platforms and tech companies are consistently wary of” is a key part of any discussion of the modern media landscape. But why? And where does defamation come into it?
A good place to start is Hustler Magazine v. Falwell. In 1983, the pornographic magazine Hustler published a satirical advertisement involving the American pastor Jerry Falwell, portraying him as committing drunken incest with his mother in an outhouse. The ad carried a disclaimer stating "ad parody - not to be taken seriously." Falwell sued Hustler for intentional infliction of emotional distress, libel, and invasion of privacy, winning twice in the lower courts on the first claim, before Hustler owner Larry Flynt appealed the case to the Supreme Court.
In 1988, in a ruling famously dramatised in the 1996 film The People vs. Larry Flynt, the Court overturned the lower courts and found unanimously in Flynt’s favour. In its reasoning, the Court explained that freedom of speech is vital to debate on “matters of public interest and concern”, and that satirical critique of public figures warrants particular protection, because such figures are “intimately involved in the resolution of important public questions.”
However, the finding was subject to a vital caveat: it applied only so long as the speech could not reasonably be construed as stating actual facts about its subject. In other words, so long as it did not defame.
Were that advertisement published today on any internet platform, this finding would apply only to the third party, not to the platform or company itself. Elon Musk can be sued, as he was, for tweeting ‘sorry pedo guy’; Twitter Inc (now X Corp.), unlike Hustler, cannot.
This is because in the late 1990s, to support the then-burgeoning commercialisation of two new technologies - ‘the internet’ and ‘the World Wide Web’ - a distinction was drawn, most notably in Section 230 of the United States’ Communications Decency Act, between ‘intermediary platforms’ such as Wikipedia and ‘publishers’ like Hustler. These ‘platforms’ were to be governed by ‘safe harbour’, a type of law which presumes compliance so long as minimum, easily met standards are upheld.
Although this arrangement has persisted for almost three decades, and the Hustler case raises comparatively quaint issues, how this paradigm continues to drive so much online dysfunction is not well understood, and it has confused perceptions of what the law permits.
This was well illustrated in the United States in 2023, by the Supreme Court case of Gonzalez v. Google LLC. Brought by the family of Nohemi Gonzalez, a 23-year-old student murdered in Paris in 2015 by Islamic State (IS), the case argued that YouTube’s algorithmic recommendation of IS propaganda made Google partly responsible for her death, in violation of the Anti-Terrorism Act. As the Washington Post’s Will Oremus noted at the time, ‘the technology giants are being sued under the Anti-terrorism Act…yet the arguments in Gonzalez v. Google keep coming back to hypotheticals about defamation law.’
Legally, if Apple, Microsoft or their ilk were found liable on such a claim, it would raise the question: are they publishers? And if they are publishers, are they subject to the caveat in Hustler Magazine v. Falwell, and therefore liable for defamation? Ultimately, finding itself unable to decide (during oral argument, Justice Elena Kagan noted, ‘we really don't know about these things…these are not like the nine greatest experts on the internet’), the Court issued a per curiam decision and sent the case back to the lower courts. Fudged it, in short.
In Europe, policymakers have made more progress, tightening safe harbour law with the passage of the Digital Services Act (DSA). While this will undoubtedly move the needle, it fundamentally preserves the status quo, principally targeting company coffers and algorithmic amplification. Algorithmic virality, while important, only counts for so much. As for the coffers, as a well-known senior technology executive once said to me, “Fines? We have so much money, we can absorb any fine.”
Now, though, generative artificial intelligence (AI) is bringing these matters to a head by forcing a question: can an AI, among other things, defame? Broadly, the consensus is that it can. As Justice Neil Gorsuch noted in Gonzalez v. Google LLC, ‘[AI] generates polemics today that...goes beyond picking, choosing, analysing or digesting content…that is not protected [speech]’.
Tech companies will argue that their terms and conditions explicitly place responsibility on users, or that their outputs should not be taken too seriously, as OpenAI has done in Georgia, where it is being sued for defamation. Similarly, Apple has suggested it is up to users to understand the fundamental limitations of generative artificial intelligence. But this is the problem with safe harbour’s minimum standard: it incentivises the bare minimum.
To quote Justice Kagan again, “Every other industry has to internalise the costs of its conduct. Why is it that the tech industry gets a pass?”
Nobody is arguing for a ban on artificial intelligence. It holds all sorts of potential, and it cannot be put back in the box anyway. Nor is this about the justified concern, which journalists in particular often voice, that defamation law can stifle free speech and protect the interests of the rich and powerful (just ask the Gonzalez family). It is about technology companies unjustly benefitting from a long-standing legal anomaly, one that fuels a chaotic media dynamic and corrodes global democracy, one day at a time.
Does anyone seriously believe that Tim Cook or Satya Nadella would deploy such patently deficient technology in this way if their respective companies were liable for the reasonably foreseeable misinformation it publishes?
The form technological advancement takes is not fated, but the product of choice. As the ripple effect of the generative AI wave continues, holding out the dual prospect of new jobs and increased joblessness, and as democracy continues to strain under the current liability regime, policymakers must act. It is becoming clear to citizenries everywhere: they can have ‘Big Tech’ or democracy, but they cannot have both.