By MARK PEARSON (@Journlaw)
What if someone posted a comment to your Facebook fan page at 5.15pm on a Friday alleging a leading businessman in your community was a paedophile?
How long would it be before someone noticed it? Immediately? Perhaps 9am Monday?
I put this question to a group of suburban newspaper journalists recently, expecting most would not be checking their newspapers’ Facebook pages over the weekend.
I guessed right, but I was amazed when one replied that such a comment would have remained there for the three months since he last looked at his company’s fan page.
Facebook fan pages are a legal time bomb for corporations, particularly in Australia where the courts have yet to rule definitively on the owner’s liability for the comments of others.
Justice Finkelstein ruled in a consumer law case that a company would have to take reasonable steps to remove misleading and deceptive comments by others from its Facebook fan pages (and Twitter feeds) the instant they were brought to its attention.
A more recent Federal Court case examined moderated comments on a newspaper’s website in the context of a racial discrimination claim.
In Clarke v. Nationwide News, Justice Michael Barker ordered the publishers of the Perth Now website to pay $12,000 to the mother of three indigenous boys who died after crashing a stolen car and to take down the racist comments about them from readers that had triggered the claim.
Central to the case was the fact that the newspaper employed an experienced journalist to moderate the comments on its site, meaning that it had taken on responsibility as ‘publisher’ of the comments. (The newspaper managing editor’s explanation of the moderation system at paras 170-178 makes for interesting reading too).
Justice Barker distinguished situations where the editors actively moderated readers’ comments from those where they did not (para 110), but restricted that distinction to the operation of s. 18C of the Racial Discrimination Act, which requires the “offensive behaviour” to have been “because of the race, colour or national or ethnic origin”.
Unmoderated comments fall outside this provision because it cannot be proven that the publisher shares the commenter’s racist motivation, unless the publisher refuses to take down the comments once they have been brought to its attention.
Justice Barker stated:
“If the respondent publishes a comment which itself offends s18C, where the respondent has “moderated” the comment through a vetting process, for example, in order not to offend the general law (or to meet other media standards), then the offence will be given as much by the respondent in publishing the offensive comment as by the original author in writing it.
“In such circumstances, it will be no defence for the respondent media outlet to say, ‘But we only published what the reader sent us’.”
Some might read this to mean that it is safer to run all comments in an unmoderated form – just as a Facebook ‘fan’ page is structured – and then take them down if you receive a complaint.
Such an approach might sit okay with these decisions in consumer or racial discrimination law, but what happens when the time bomb lands – a shocking defamation imputation, a heinous allegation damaging a forthcoming trial, or the breach of a court order or publication restriction like the naming of a rape victim?
Defamation and contempt are matters of ‘strict liability’, where you might be liable even if you are ignorant of the defamatory or contemptuous content you are publishing. The only intent required is that you intended to publish your publication or were ‘reckless’ in the publishing of the material. And neither has offered protection for publishers providing a forum for the comments of others.
Which brings us back to the question at the very start. If the Federal Court has ruled you should remove unmoderated material breaching consumer or race law within a reasonable time of becoming aware of it, what will courts deem a ‘reasonable time’ for a serious allegation of child molestation about a prominent citizen to remain on a publisher’s Facebook fan page?
If the allegation were about me, I certainly wouldn’t want it remaining there over a weekend. Or even five minutes. Any period of time would be unreasonable for such a dreadful slur.
The High Court established 10 years ago in the Gutnick case that a publisher is responsible for defamation wherever their material is downloaded. As The Age revealed in 2010, a blogger using the pseudonym ‘witch’ launched a series of attacks on a stockmarket forum about technology security company Datamotion Asia Pacific Ltd and its Perth-based chairman and managing director, Ronald Moir. A court ordered the forum host HotCopper to hand over the blogger’s details which could only be traced to an interstate escort service. But private investigation by the plaintiff’s law firm eventually found the true author of the postings on the other side of the nation who was then hit with a $30,000 defamation settlement.
And what if it is a litany of allegations about the accused in an upcoming criminal trial? I have blogged previously about the awkward position the Queensland Police face with their very successful Facebook fan page when citizens comment prejudicially about the arrest of an accused in a criminal case. No matter how well those fan page comments are moderated by police media personnel, they could never keep pace with the prejudicial avalanche of material posted on the arrest of a suspect in a high profile paedophilia case.
That leads to the awkward situation of the key prosecutor of a crime hosting – albeit temporarily – sub judice material on their own site. It can’t be long before defence lawyers use this as grounds for seeking to quash a conviction.
The situation is different in many other countries – particularly in the US, where s. 230 of the Communications Decency Act gives full protection to ‘interactive computer services’, even shielding blog hosts from liability for comments by users.
Much has changed in the three decades since I had my first letter to the editor published by the Sydney Morning Herald as an 18-year-old student.
I can clearly recall that newspaper’s letters editor phoning me in my suburban Sydney home to check that I really was the author of the letter and that I agreed with his minor edits. No doubt he then initialled the relevant columns in the official letters log – the standard practice that continues in some newspaper newsrooms today.
But all that caution has been abandoned in the race for relevance in the digital and Web 2.0 eras.
First, it was news organisations’ websites allowing live comments from readers – still largely moderated. For a while, most insisted on identification details from their correspondents.
Next came their publication in hard copy of SMS messages received in response to their stories. My local newspaper – the Gold Coast Bulletin – sometimes publishes several pages of such short texts from readers using witty pseudonyms.
And now we have the Facebook fan pages, where the technology does not allow the pre-moderation of the comments of others. You need to have that facility switched completely ‘on’ or ‘off’ – and it defeats the purpose of engaging with readers for a media organisation to turn off the debate. I can post a Facebook comment from an Internet café under the name ‘Poison Pen’ and it may well be vetted by nobody.
The whole issue is symptomatic of the social media challenges facing both the traditional media and the courts.
Meanwhile, expect to wait a while to see your comments to this blog published. I’ve elected for full moderation of all comments, and have already rejected a couple that seem to leave me exposed as publisher. You can’t be too cautious now, can you?
© Mark Pearson 2012
Disclaimer: While I write about media law and ethics, nothing here should be construed as legal advice. I am an academic, not a lawyer. My only advice is that you consult a lawyer before taking any legal risks.