Cyberspace: a forum for some bigotry

IT'S funny how progress cuts both ways - easing our toil but also rattling our comfort zones and throwing new problems at us.

The Internet, for example, has revolutionised communications and democratised media ownership. It has empowered the public to determine what news is, when and how it will be produced and consumed and by whom.

Discerning readers, viewers and listeners want to air their views, for free and unfettered by editors' whims. Media owners and editors have been left figuring out how to keep online conversation free-flowing and civil.

It's scary what sometimes passes as debate or "engagement" online. There's racism, sexism, homophobia, xenophobia and other vulgarities.

A reader, Bongani, has complained that Times Live provides "a breeding ground for hate speech and (could) potentially incite violence against South Africans".

He bemoans the scant regard for the guidelines promoting civility, saying that "the editorial team, moderators and Avusa are not doing anything about it".

Bongani would like to see decisive steps taken to moderate comments and that "authentic people give information about themselves ... so that discipline can be maintained".

How can we let adults say what they want to other adults online, with due regard to the law and the dictates of politeness and common decency, as we do in the letters pages of newspapers?

Paddi Clay, managing editor of Times Live, points out that in the Web 2.0 era mainstream news media no longer act as gatekeepers. They now facilitate conversations among disparate groups.

But the media can't provide a platform for amplifying vitriol and claim innocence in the resulting harm. The press can be sued for the postings of twisted minds on their online sites so it is forced to moderate comments.

Should such moderation happen before or after the fact?

Avusa has chosen to go the latter route, letting people vent their spleen and acting only when there is clear abuse.

Alan Williams, The Herald's editorial systems manager, calls it peer moderation. This entails registered users reporting abuse by clicking on the icon provided for the purpose.

Says Williams: "If someone sees an inappropriate comment they complain and the comment gets removed. The system alerts us to the reported abuse and we check if there is indeed abuse. If there is we delete the comment completely and ban persistent abusers forever."
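The loop Williams describes — a reader reports, staff review, repeat offenders are banned — can be sketched in a few lines. This is a hypothetical illustration only; the class, the strike count and the three-strikes threshold are invented, not Avusa's actual system:

```python
# Hypothetical sketch of the peer-moderation loop described above:
# readers flag a comment, moderators review it, persistent abusers get banned.

BAN_THRESHOLD = 3  # invented number: upheld reports before a permanent ban


class Moderator:
    def __init__(self):
        self.strikes = {}   # author -> count of upheld abuse reports
        self.banned = set()

    def report(self, comment):
        """A reader clicks the 'report abuse' icon; the comment goes to review."""
        return self.review(comment)

    def review(self, comment):
        """Staff check whether there is indeed abuse, then act."""
        if not comment["abusive"]:
            return "kept"
        author = comment["author"]
        self.strikes[author] = self.strikes.get(author, 0) + 1
        if self.strikes[author] >= BAN_THRESHOLD:
            self.banned.add(author)
            return "deleted; author banned"
        return "deleted"
```

The point of the sketch is that nothing is removed until a reader reports it and a human upholds the report — moderation after the fact, not before.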

Pre-moderation, including choosing which stories can be commented on, smacks of censorship. It also requires employing dedicated staff, which is unviable.

Andrew Trench, editor of Daily Dispatch, says they do not pre-moderate "simply because we don't have the resources and I believe that community participation in a discussion should be able to self-moderate".

Emma Sadleir, associate at lawyers Webber Wentzel, says media companies are obliged to monitor what is said on their sites. The terms and conditions for commenting online must clearly state that abuse is prohibited and that such comments will be removed.

So newspapers will have to take precautionary steps such as installing software that filters swear words and removes defamatory comments once alerted to them.
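A swear-word filter of the kind mentioned is, at its simplest, a lookup against a blocklist. A minimal sketch, assuming a site-maintained word list (the list contents and function name here are invented for illustration):

```python
import re

# Invented example blocklist; a real one would be larger and editorially curated.
BLOCKLIST = {"swearword1", "swearword2"}


def filter_comment(text):
    """Mask blocklisted words with asterisks before a comment is published."""
    def mask(match):
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKLIST else word
    return re.sub(r"\w+", mask, text)
```

Such filters are cheap to run automatically, which is why they sit alongside, rather than replace, human review of reported comments.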

Says Sadleir: "The principle of English law that distributors may escape liability on the grounds of absence of negligence has been recognised."

It is easier to sue the media than the abusers, who often cannot be traced because they hide behind false identities.

This makes Bongani's point about verifying users' identities pertinent and again raises the question of affordability.

The problems will persist for as long as society remains imperfect. Until then, the imperfect solution lies in vigilance in reporting online abuse.