
Queensland’s biggest publisher – the police – try to calm the FB lynch mob

By MARK PEARSON

The resources of the Queensland Police Service Facebook fan page were stretched over the past 24 hours to cope with the public response to their announcement of an arrest of a suspect in one of Brisbane’s most compelling ‘whodunnit’ murder mysteries.

Mainstream and social media speculation about the case has been rampant since 43-year-old Allison Baden-Clay went missing on April 23. Her husband Gerard appeared in court today charged with her murder.

As I have blogged previously, the Queensland Police Service has a highly successful Facebook page which established the bulk of its 289,500 fan base during the devastating Brisbane floods in January last year. It proved an excellent community communication tool during the disaster and since then as a crime detection aid as the public volunteer leads on unsolved crimes and public safety.

But the challenge comes when Police Media announce on their Facebook page the apprehension of a suspect in a high profile case.

The problem with Facebook fan pages is that you must have the ‘comment’ function turned completely ‘on’ or ‘off’ – so the best the police can do is monitor the feed and remove offensive or prejudicial material after it has been posted.

That might be fine during an uneventful day when the police social media team can keep on top of the message flow – but when an arrest has been made in an emotion-charged crime like a murder or a child sex attack many fans want to ‘vent’.
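The post-publication moderation described above can be pictured as a periodic sweep over new comments. As a purely illustrative sketch – not any real Queensland Police or Facebook tool, and with keyword patterns and data shapes that are entirely my own assumptions – such a sweep might look like this:

```python
import re
from datetime import datetime, timedelta

# Hypothetical patterns for prejudicial commentary; illustrative only.
PREJUDICIAL_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in [
        r"\bobvious(ly)? (that )?(he|she|they) did it\b",
        r"\brots? in (jail|hell)\b",
    ]
]

def sweep(comments, now):
    """Split comments into (kept, flagged).

    Each comment is a dict with 'text' and 'posted' (a datetime).
    Flagged comments record how long they were publicly visible,
    since that exposure window is what a court might scrutinise.
    """
    kept, flagged = [], []
    for c in comments:
        if any(p.search(c["text"]) for p in PREJUDICIAL_PATTERNS):
            flagged.append({**c, "visible_for": now - c["posted"]})
        else:
            kept.append(c)
    return kept, flagged
```

The point of the sketch is the weakness the post describes: the sweep only runs after publication, so a prejudicial comment is visible for however long the gap between sweeps happens to be.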

That’s what happened with the arrest of a suspect in the murder of Sunshine Coast teenager Daniel Morcombe last August.

It happened again last night and today as, within 21 hours, more than 500 fans commented on the Police Media announcement that Baden-Clay had been charged with his wife’s murder and more than 1,500 ‘liked’ the announcement. Those 506 comments were the ones that survived the post-publication moderation process where officers in the social media unit trawl through the latest posts to delete the inappropriate ones.

The law of sub judice in Australia dictates that nothing can be published that might prejudice the trial of an accused after they have been arrested or charged. That includes any assumption of guilt (or even innocence), evidentiary material, theories about the crime, witness statements, prior convictions or character material about the accused. It even bans visual identification of the accused if that might be an issue in court. In a murder trial it usually is.

The penalty can be a criminal conviction on your record, a stiff fine and sometimes even a jail term for contempt of court.

Once the accused has appeared in court, journalists covering the matter are protected from both contempt and defamation action if they write a ‘fair and accurate’ report of the hearing, sticking to material stated in open court in the presence of the jury – if there is one.

It’s hard enough for reporters to get their heads around these rules – let alone the Facebook fans posting their theories on a murder to the police Facebook page.

Even some of the posts that have survived the police editing process to date push the boundaries of acceptable commentary on a pending case.

One stands out: “Ann Gray: Took long enough. It was obvious that he did it. Hope he rots in jail.”

That was six hours after the announcement, and obviously the moderators were running short on patience with their ‘fans’. The moderators took to calling those speculating on the crime “Facebook detectives”. One replied to Ms Gray: “Queensland Police Service: Ann Gray *sigh* Really? The third detective we have commenting on here that does not comprehend what it takes? I suggest you don’t pass judgement on something that you know nothing about!”, and then “Queensland Police Service: I am not sure ‘because it is obvious’ is suffice (sic) evidence in court, Facebook detectives. It is a matter before the courts. Enough!”

They also tried with a standard warning to commenters that was pasted into the discussion on several occasions: “Facebookers who are just joining this post, please do not speculate on this matter. Any posts which do are deleted and those who continue will be banned from our FB page. Please respect our rules. Thanks.”

One fan – Bec Mooney – suggested the police disable their comments function if they were so concerned about offensive and prejudicial material appearing, to which the police replied: “Queensland Police Service Bec Mooney – WE CAN’T DISABLE COMMENTS. Take that issue to Facebook. Even if we could, it would contradict the idea of social media.”

Do I sense a little attitude here? Clearly, the officers were getting tired and frustrated in the midst of the onslaught of the ‘lynch mob’, but surely the correspondent Ms Mooney had a valid point.

As I blogged earlier this week, Australian courts have ruled that the hosts of such fan pages are legally responsible for the comments of others on their sites and must act within a reasonable time to remove illegal or actionable material.

But they haven’t yet had to rule on a serious sub judice matter – so the key question is: how long is it reasonable for a prejudicial statement like the ‘obvious that he did it’ and ‘rots in jail’ comment to remain on a public law enforcement agency’s Facebook page? It had been there 15 hours when we took our screen shot and may well still be there as you read this.

These rules apply to the mainstream media, and the police fan page has been so successful that it is now Queensland’s biggest publisher on some counts. Its fan base outstrips the Courier-Mail’s circulation, which peaks at 255,000 on a Saturday. And that newspaper – Queensland’s biggest – has fewer than 20,000 fans on its Facebook page. The ABC has just 91,000 nationally.

They aren’t allowed to publish this kind of prejudicial material.

Surely the police have even less excuse for hosting such comments even for a moment. The Queensland Police Service is the arresting and prosecuting authority whose job is to preserve the integrity of the justice process.

I fear it will not be long before a savvy defence lawyer seizes the opportunity to use such prejudicial commentary as grounds for appeal – perhaps resulting in a trial being aborted at great public expense or even a verdict quashed. That would be the exact opposite of what most of these commenters and the police would want.

Social media is clearly a superb resource for police and other agencies to use to connect with their communities and to build public trust. But let’s get sensible with this.

Instead of boasting to the whole world about a high profile arrest like this one, surely the police can hold back and let the mainstream media publish their announcement just as they have done for decades. The message would still get out and at least they would not then have the headache of the avalanche of comments in response to this kind of PR announcement.

The police argue that disabling comments might “contradict the idea of social media”, but surely their hosting of prejudicial material – even for a short time – contradicts the valued right to a fair trial of those they have arrested.  

© Mark Pearson 2012

Disclaimer: While I write about media law and ethics, nothing here should be construed as legal advice. I am an academic, not a lawyer. My only advice is that you consult a lawyer before taking any legal risks.


The liability time bomb of comments on your FB fan page #medialaw

By MARK PEARSON

What if someone posted a comment to your Facebook fan page at 5.15pm on a Friday alleging a leading businessman in your community was a paedophile?

How long would it be before someone noticed it? Immediately? Perhaps 9am Monday?

I put this question to a group of suburban newspaper journalists recently, expecting most would not be checking their newspapers’ Facebook pages over the weekend.

I guessed right, but I was amazed when one replied that such a comment would have remained there for the three months since he last looked at his company’s fan page.

Facebook fan pages are a legal time bomb for corporations, particularly in Australia where the courts have yet to rule definitively on the owner’s liability for the comments of others.

In an earlier blog I looked more closely at the decision of Federal Court Justice Ray Finkelstein in the Allergy Pathways case last year.

Justice Finkelstein ruled that, in a consumer law case, a company must take reasonable steps to remove the misleading and deceptive comments of others from its Facebook fan pages (and Twitter feeds) once they have been brought to its attention.

A more recent Federal Court case examined moderated comments on a newspaper’s website in the context of a racial discrimination claim.

In Clarke v. Nationwide News, Justice Michael Barker ordered the publishers of the PerthNow website to pay $12,000 to the mother of three indigenous boys who died after crashing a stolen car, and to take down the racist readers’ comments about them that had triggered the claim.

Central to the case was the fact that the newspaper employed an experienced journalist to moderate the comments on its site, meaning that it had taken on responsibility as ‘publisher’ of the comments. (The newspaper managing editor’s explanation of the moderation system at paras 170-178 makes for interesting reading too).

Justice Barker distinguished situations where the editors actively moderated readers’ comments from those where they did not (para 110), but restricted that distinction to the operation of s. 18C of the Racial Discrimination Act, which requires the “offensive behaviour” to have been “because of the race, colour or national or ethnic origin”.

Unmoderated comments fall outside this provision because it cannot be proven the publisher shares the commenter’s racist motivation – unless the publisher refuses to take the comments down once they have been brought to its attention.

Justice Barker stated:

“If the respondent publishes a comment which itself offends s18C, where the respondent has “moderated” the comment through a vetting process, for example, in order not to offend the general law (or to meet other media standards), then the offence will be given as much by the respondent in publishing the offensive comment as by the original author in writing it.

“In such circumstances, it will be no defence for the respondent media outlet to say, ‘But we only published what the reader sent us’.”

Some might read this to mean that it is safer to run all comments in an unmoderated form – just like a Facebook ‘fan’ page is structured – then take them down if you get a complaint.

Such an approach might sit okay with these decisions in consumer or racial discrimination law, but what happens when the time bomb lands – a shocking defamation imputation, a heinous allegation damaging a forthcoming trial, or the breach of a court order or publication restriction like the naming of a rape victim?

Defamation and contempt are matters of ‘strict liability’, where you might be liable even if you are ignorant of the defamatory or contemptuous content you are publishing. The only intent required is that you intended to publish the material, or were ‘reckless’ in publishing it. And neither action has offered protection to publishers who merely provide a forum for the comments of others.

Which brings us back to the question at the very start. If the Federal Court has ruled you should remove unmoderated material breaching consumer or race law within a reasonable time of becoming aware of it, what will courts deem a ‘reasonable time’ for a serious allegation of child molestation about a prominent citizen to remain on a publisher’s Facebook fan page?

If the allegation were about me, I certainly wouldn’t want it remaining there over a weekend. Or even five minutes. Any period of time would be unreasonable for such a dreadful slur.

The High Court established 10 years ago in the Gutnick case that a publisher is responsible for defamation wherever their material is downloaded. As The Age revealed in 2010, a blogger using the pseudonym ‘witch’ launched a series of attacks on a stockmarket forum about technology security company Datamotion Asia Pacific Ltd and its Perth-based chairman and managing director, Ronald Moir. A court ordered the forum host HotCopper to hand over the blogger’s details which could only be traced to an interstate escort service. But private investigation by the plaintiff’s law firm eventually found the true author of the postings on the other side of the nation who was then hit with a $30,000 defamation settlement.

And what if it is a litany of allegations about the accused in an upcoming criminal trial? I have blogged previously about the awkward position the Queensland Police face with their very successful Facebook fan page when citizens comment prejudicially about the arrest of an accused in a criminal case. No matter how well those fan page comments are moderated by police media personnel, they could never keep pace with the prejudicial avalanche of material posted on the arrest of a suspect in a high profile paedophilia case.

That leads to the awkward situation of the key prosecutor of a crime hosting – albeit temporarily – sub judice material on their own site. It can’t be long before defence lawyers use this as a reason to quash a conviction.

The situation is different in many other countries – particularly in the US, where s. 230 of the Communications Decency Act gives broad protection to ‘interactive computer services’, even shielding blog hosts from liability for comments by users.

Much has changed in the three decades since I had my first letter to the editor published by the Sydney Morning Herald as an 18-year-old student.

I can clearly recall that newspaper’s letters editor phoning me in my suburban Sydney home to check that I really was the author of the letter and that I agreed with his minor edits.  No doubt he then initialled the relevant columns in the official letters log – the standard practice that continues in some newspaper newsrooms today.

But all that caution has been abandoned in the race for relevance in the digital and Web 2.0 eras.

First, it was news organisations’ websites allowing live comments from readers – still largely moderated. For a while, most insisted on identification details from their correspondents.

Next came their publication in hard copy of SMS messages received in response to their stories. My local newspaper – the Gold Coast Bulletin – sometimes publishes several pages of such short texts from readers using witty pseudonyms.

And now we have the Facebook fan pages, where the technology does not allow the pre-moderation of the comments of others. You need to have that facility switched completely ‘on’ or ‘off’ – and it defeats the purpose of engaging with readers for a media organisation to turn off the debate. I can post a Facebook comment from an Internet café under the name ‘Poison Pen’ and it may well be vetted by nobody.

The whole issue is symptomatic of the social media challenges facing both the traditional media and the courts.

Meanwhile, expect to wait a while to see your comments to this blog published. I’ve elected for full moderation of all comments, and have already rejected a couple that seem to leave me exposed as publisher. You can’t be too cautious now, can you?

© Mark Pearson 2012


Anti-social racism in social media is unwise and illegal

By MARK PEARSON

Two recent cases stand out as examples where racist commentary has landed online writers in legal trouble.

The first was in the UK where a student was jailed for 56 days for Tweeting offensive remarks about a stricken footballer.

Another was in Australia, where a Federal Court judge ordered the publishers of the News Limited website PerthNow to pay $12,000 over comments posted by readers featuring racial abuse of three indigenous teenagers who died in a stolen car. The decision reinforces the Australian position that you are legally responsible for the moderated comments of others on your social media sites and websites.

I take up the issue of discriminatory abuse in my new book  – Blogging and Tweeting Without Getting Sued: A global guide to the law for anyone writing online.

The chapter is titled ‘The fine line between opinion and bigotry’. Here’s a short excerpt:

—————-

The fine line between opinion and bigotry

Sadly, human beings have found the negative energy to hate each other since time immemorial. Hatred of one form or another explains most of the wars and acts of violence throughout history. While the Internet and social media have allowed us to communicate with countless new friends and form all kinds of new professional and personal relationships, we do not just attract the attention of the ‘like-minded’.

There is a war going on in our pockets and handbags in each and every smartphone and on every home computer connected to the Internet. There are people so possessed with hatred and revenge that they are conducting a cyberwar on the objects of their disdain.

No matter who you are and where you live, there are others who might not know you personally but hate you for the category of human being you are: black, white, Asian, Hispanic, male, female, gay, straight, conservative, liberal, environmentalist, climate change denier, Muslim, Jew, Christian, obese, American, British, Pakistani, teenager, rich, poor, lawyer, politician or used car salesman. (Lucky there’s not a ‘hate’ button on Facebook, hey?)

Sometimes even some fun turns sour. A satirical swipe at redheads on the Simpsons television series prompted a 14-year-old Canadian boy to set up a Facebook ‘Kick a Ginger’ campaign in 2008, rapidly ‘friended’ by more than 5000 fans. As the Telegraph reported, dozens of children posted comments on the page claiming to have attacked redheads, with a 13-year-old girl from Alberta and her sister among the victims of the schoolyard bullies.

Such people judge you based on the labels they apply to you rather than who you really are or your life experiences that inform your views and values. And they are online and angry.

If you also have strong opinions and express them without fear or favour, your challenge is to avoid becoming one of them. Because if you do, the force of the law in most places can be brought down upon you.

Some individuals just cannot back away from a fight in real life or cyberspace. They become so obsessed with their causes or grudges that they launch poisonous online assaults on others that can leave their targets as traumatised as they would have been if they had been assaulted physically. Tragically, some victims have become so despairing and fearful that they have been driven to take their own lives.

In the eyes of the law, such attacks go under a range of names according to their type, scale, and jurisdiction. They include: cyberbullying, cyberstalking, online trolling, malicious online content, using carriage services to menace, harassment, hate speech, vilification, discrimination and even assault. Some are criminal offences where offenders can be fined or jailed and others are civil wrongs where courts can award damages to victims. Some are litigated under actions we have already considered such as defamation, privacy and breach of confidentiality.

Some are difficult to explain because the motivations are beyond the imagination of ordinary citizens. Australian ‘troll’ Bradley Paul Hampson served 220 days in jail in 2011 for plastering obscene images and comments on Facebook tribute pages dedicated to the memory of two children who had died in tragic circumstances. He had entered the sites to depict one victim with a penis drawn near their mouth and offensive comments including “Woot I’m Dead” and “Had It Coming”.

At about the same time the US Appeals Court in Virginia was dealing with a suit by former high school senior Kara Kowalski who had been suspended for five days for creating a MySpace page called ‘S.A.S.H’. She claimed it stood for ‘Students Against Sluts Herpes’, but the court found it really aimed to ridicule a fellow student named Shay. She had also incurred a social suspension for 90 days, preventing her from cheerleading and from crowning her successor in the school’s ‘Queen of Charm’ review. Kowalski felt aggrieved at the suspension because she claimed it had violated her constitutional speech and due process rights as it had not happened during a school activity but was really ‘private, out of school speech’. But the court disagreed.

“Kowalski’s role in the ‘S.A.S.H.’ webpage, which was used to ridicule and demean a fellow student, was particularly mean-spirited and hateful,” Judge Niemeyer wrote. “The webpage called on classmates, in a pack, to target Shay N., knowing that it would be hurtful and damaging to her ability to sit with other students in class at Musselman High School and have a suitable learning experience.” The court agreed with the school and the trial judge that ‘such harassment and bullying is inappropriate and hurtful’ and denied her damages claim. A ‘Queen of Charm’ indeed!

Blogging and Tweeting Without Getting Sued: A global guide to the law for anyone writing online is now available in print format in Australia and New Zealand (US release in October) and as an ebook elsewhere via Kindle, Google, Kobo and some other providers. [Order details here.]

[Media: Please contact Allen & Unwin direct for any requests for advance copies for review. Contact publicity@allenandunwin.com or call +61 2 8425 0146]

© Mark Pearson 2012
