Google wins online defamation case: not a 'publisher' of website content

In an eminently sensible decision, Eady J has held that Google is not liable for defamatory material that appears in the extracts (‘snippets’) displayed underneath search results.

Stated more precisely, Metropolitan International Schools Ltd v DesignTechnica Corporation [2009] EWHC 1765 (QB) now stands for the proposition that the facilitator of a defamatory imputation who, without human input or authorisation, causes that imputation to be conveyed without knowledge of its character, cannot be characterised as a publisher at common law. The case revolved around a forum thread (I do not endorse its contents) hosted on the first defendant’s website. That thread contained posts made by users alleging that the plaintiff’s distance learning courses were ‘nothing more than a scam’. The thread ranked highly (3rd and 4th) on searches for the plaintiff via Google.co.uk and Google.com. As is Google’s practice, the search results included extracts of matching text, including ‘Train2Game new SCAM for Scheidegger’. The plaintiff argued that this was defamatory and commenced proceedings against DesignTechnica, Google UK and Google Inc.

Eady J held that Google Inc could not be liable, even assuming that the comment was defamatory, and accordingly ordered that an earlier decision allowing service outside the jurisdiction be set aside:

[48] I turn to what seems to me to be the central point in the present application; namely, whether [Google Inc] is to be regarded as a publisher of the words complained of at all. The matter is so far undecided in any judicial authority and the statutory wording of the [Defamation Act 1996 (UK)] does nothing to assist. It is necessary to see how the relatively recent concept of a search engine can be made to fit into the traditional legal framework …

[49] When a search is carried out by a web user via the Google search engine it is clear … that there is no human input from [Google Inc]. … It is performed automatically in accordance with computer programmes. … It is fundamentally important to have in mind that [Google Inc] has no role to play in formulating the search terms. Accordingly, it could not prevent the snippet appearing in response to the user’s request unless it has taken some positive step in advance. There being no input from the Third Defendant, therefore, on the scenario I have so far posited, it cannot be characterised as a publisher at common law. It has not authorised or caused the snippet to appear on the user’s screen in any meaningful sense. It has merely, by the provision of its search service, played the role of a facilitator.

For more details about this very interesting case, please continue reading.

Analysis of the decision

The judgment considers three major issues: (1) whether Google is liable for authoring and publishing the defamatory comments; (2) whether Google can be liable for acquiescing in their publication; and (3) in any case, whether Google would be protected by the innocent dissemination defence.

Eady J begins by comparing Google’s position to that of the compiler of a library catalogue, who may face liability under the repetition rule if the catalogue sets out tortious content from books and is subsequently consulted by somebody. The difference between such a compiler and Google, according to his Honour, is that a search engine does not consciously choose the text to be displayed: there is ‘no intervention on the part of any human agent. It has all been done by the web-crawling “robots”.’ The automatic nature of Google’s indexing and content extraction processes is what separates it from the librarian.
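To see what ‘done by robots’ means in practice, consider a grossly simplified sketch of automatic snippet extraction. The function name and windowing heuristic below are my own invention, not Google’s actual (proprietary) pipeline; the point is only that the extract shown to a user is computed mechanically from the indexed page and whatever terms the user happens to type, with no person selecting the words that appear:

```python
# A deliberately naive illustration of automatic snippet extraction.
# This is NOT Google's real pipeline; it only demonstrates that the
# displayed extract is computed, not chosen by a human.

def make_snippet(page_text: str, query: str, window: int = 60) -> str:
    """Return the first region of page_text surrounding a query term."""
    lowered = page_text.lower()
    for term in query.lower().split():
        pos = lowered.find(term)
        if pos != -1:
            start = max(0, pos - window)
            end = min(len(page_text), pos + len(term) + window)
            return "..." + page_text[start:end].strip() + "..."
    # No term matched: fall back to the opening of the page.
    return page_text[: 2 * window].strip() + "..."

# The crawler stored the page long ago; the snippet only comes into
# existence when a user runs a search.
indexed_page = "Posts in this thread allege that the courses are a scam ..."
print(make_snippet(indexed_page, "Train2Game scam"))
```

Even this toy version illustrates Eady J’s point: until a search is run, the snippet does not exist, and its contents depend entirely on the user’s query.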

The next issue was whether Google’s liability was enlarged once it had received notice of the defamatory content. The plaintiff argued that this moment of realisation was critical: once Google knew of the defamatory character of a result snippet and failed to block access, it should be considered liable by acquiescence. (This is similar to secondary or authorisation liability for copyright infringement, where knowledge is often an important, though not decisive, element.) A web host that conveys defamatory material, so the argument goes, will be a joint tortfeasor if it continues to authorise publication of the material after becoming aware of it. Eady J rejected this argument:

[55] A search engine, however, is a different kind of Internet intermediary. It is not possible to draw a complete analogy with a website host. One cannot merely press a button to ensure that the offending words will never reappear on a Google search snippet: there is no control over the search terms typed in by future users. If the words are thrown up in response to a future search, it would by no means follow that [Google Inc] has authorised or acquiesced in that process. …

[56] There are some steps that [Google] can take and they have been … described as its ‘take down’ policy. There is a degree of international recognition that the operators of search engines should put in place such a system … to take account of legitimate complaints about legally objectionable material. It is by no means easy to arrive at an overall conclusion that is satisfactory from all points of view. In particular, the material may be objectionable under the domestic law of one jurisdiction while being regarded as legitimate in others.

[57] In this case, the evidence shows that Google has taken steps to ensure that certain identified URLs are blocked, in the sense that when web-crawling takes place, the content of such URLs will not be displayed in response to Google searches carried out on Google.co.uk. This has now happened in relation to the ‘scam’ material on many occasions. But I am told that [Google] needs to have specific URLs identified and is not in a position to put in place a more effective block on the specific words complained of without, at the same time, blocking a huge amount of other material which might contain some of the individual words comprising the offending snippet.

[58] It may well be that [Google’s] ‘notice and take down’ procedure has not operated as rapidly as [the plaintiff] would wish, but it does not follow as a matter of law that between notification and ‘take down’ [Google] becomes or remains liable as a publisher of the offending material. While efforts are being made to achieve a ‘take down’ in relation to a particular URL, it is hardly possible to fix [Google Inc] with liability on the basis of authorisation, approval or acquiescence.

In other words, even where Google’s take down process is slow and ultimately ineffective from the plaintiff’s perspective, Google will not be fixed with secondary liability for defamation. It would be ‘unrealistic’ to attribute to Google responsibility for publication, ‘whether on the basis of authorship or acquiescence’: at [59]. Google wins.
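The technical constraint described at [57] is worth unpacking. The hypothetical sketch below (the results, blocklists and function names are mine, not Google’s) illustrates why blocking specific URLs is surgical while blocking on the offending words is not: a word like ‘scam’ appears in vast quantities of perfectly innocent text, so a keyword filter suppresses far more than the complained-of snippet.

```python
# Hypothetical contrast between URL-based and keyword-based blocking.
# The result set and blocklists are invented for illustration only.

results = [
    {"url": "http://forum.example.com/thread/123",
     "snippet": "Train2Game new SCAM for Scheidegger"},
    {"url": "http://news.example.org/consumer-tips",
     "snippet": "How to spot a scam before you pay"},
    {"url": "http://blog.example.net/review",
     "snippet": "A balanced review of distance learning"},
]

BLOCKED_URLS = {"http://forum.example.com/thread/123"}  # specific complaint
BLOCKED_WORDS = {"scam"}                                # blunt instrument

def block_by_url(results):
    """Remove only the specifically identified pages (precise)."""
    return [r for r in results if r["url"] not in BLOCKED_URLS]

def block_by_word(results):
    """Remove anything containing an offending word (over-inclusive)."""
    return [r for r in results
            if not BLOCKED_WORDS & set(r["snippet"].lower().split())]

print(len(block_by_url(results)))   # 2: only the complained-of page is gone
print(len(block_by_word(results)))  # 1: legitimate consumer advice is lost too
```

This is the asymmetry his Honour relied on: Google could act once ‘specific URLs [were] identified’, but could not filter on the words themselves without blocking a huge amount of unrelated material.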

Search engines as ‘mere conduits’

His Honour’s comments must, of course, be taken in context. This was an interlocutory challenge to jurisdiction, not a trial on the merits (Google Inc had argued that England was not the proper forum and that suit should be brought, if anywhere, in California, since it was domiciled there and had no assets in England). Google UK had already taken steps to filter the results in England. Google had acted on the plaintiff’s request within the limits of what was technically feasible. And Google was not complicit in publication.

Google’s reaction was unsurprising: search engines are mere conduits. According to the company, the decision ‘reinforces the principle that search engines are not responsible for content that is published on third-party Web sites’. Plaintiffs ‘should address their complaint to the person who actually wrote and published the material, and not a search engine, which simply provides a searchable index of content on the Internet.’

However, the decision may be bad news for other online intermediaries, such as those that do involve some human intervention in published content (eg, Wikipedia, online fora, blog comments). Because those parties have editorial control over user-generated content, they would be closer to librarians than to electronic automata under Eady J’s analysis. Moreover, although the outcome in Metropolitan is manifestly correct, it is unclear why human intervention should define the limits of the tort of defamation: wouldn’t a program that could automatically detect and collate defamatory materials be just as mischievous as (if not more mischievous than) a manual compilation of the same materials? Arguably, this distinction introduces (at least partially) a new mental element that discourages content moderation and incentivises automated data mining.

An interesting (and unresolved) question is whether the same result would be reached in Australia. Here, the innocent dissemination defence applies where an intermediary (such as Google) is a ‘subordinate’ distributor of content that it does not know (and has no grounds for suspecting) is defamatory, and that ignorance is not caused by the intermediary’s negligence: Thompson v Australian Capital Television Pty Ltd (1996) 186 CLR 574. The defence would probably apply in this case, at least until Google could be fixed with knowledge. However, that analysis still raises some questions, most troublingly about what happens after that point: if the only way an intermediary can avoid liability is to censor allegedly tortious content at the behest of a private and interested party, there are clear risks to online freedom of expression.

Comments

Similar issues were raised as much as a decade ago with the sale of Nazi literature and memorabilia on eBay and Amazon, which is legal in the US but illegal in Germany. In those cases, the companies had to modify their rules and offerings to accommodate German law.