If you’re responsible for moderating internet comments, you know they can be pretty toxic. Like, “humanity is doomed” toxic. Given the racism, misogyny, harassment, and abuse in many online comment sections, and the costs of traditional moderation, it’s no surprise so many sites consider turning them off.

Content moderation is a difficult job, as covered extensively in WIRED and The Guardian. I spent four years managing the online community for TED, which meant I ended up reading a whole lot of comments. But the more I got to know the commenters, the more I started to empathize with them, even the most difficult trolls. The people behaving poorly were, for the most part, perfectly reasonable human beings. But because of the way comment sections are designed, a conversation that would be civil in a face-to-face interaction can easily become a toxic mess of abuse and harassment online.

We all have good days and bad days. Very few people are 100% good or 100% bad; much of how we behave depends on context. Most public spaces need at least basic structure to keep us treating each other with respect, and we have those structures in our real-world societies. But so far those structures haven’t followed us online. Instead, publishers and site owners are left to reactively moderate whatever people submit in the comments, around the clock.

There are many approaches to easing the burden of moderation; here are a few of them.

Language Processing

From the use of banned-word lists to sophisticated machine learning like Google Jigsaw, countless efforts have been made to use machines to moderate user-generated content like comment sections. However, language is complicated, and algorithms can’t pick up on subtleties such as idiom and context.
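
To make that limitation concrete, here’s a minimal sketch of a banned-word filter in Python. The word list and the examples are hypothetical, and this isn’t any particular site’s implementation, just the shape of the approach:

    import re

    # Hypothetical banned-word list; real moderation lists run to
    # thousands of entries.
    BANNED_WORDS = {"idiot", "trash"}

    def naive_filter(comment: str) -> bool:
        """Return True if the comment should be blocked."""
        # Match whole words only, to avoid flagging innocent substrings
        # (the classic "Scunthorpe problem").
        words = re.findall(r"[a-z']+", comment.lower())
        return any(word in BANNED_WORDS for word in words)

    print(naive_filter("You're an idiot."))               # True: caught
    print(naive_filter("People like you ruin the web."))  # False: insult missed
    print(naive_filter("Don't feed the trash pandas!"))   # True: harmless, blocked

The last two calls show both failure modes at once: hostility that uses no listed word sails through, while an innocuous comment gets blocked. Machine-learning classifiers narrow the gap, but they still stumble on the same problems of idiom and context.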

Pre-Moderation

Full-time staff at the New York Times read and approve each comment before it’s posted to the site. For those that can afford it, this yields excellent results, though it limits the total number of comments a site can host. Smaller organizations with lower website traffic may find that this approach suffices if they get only a few comments per post.

Author Participation

A recent study by the Engaging News Project found that civility increased when the author of the article directly participated in the comment section. By putting a face (even a virtual one) to the comments and showing that there is a real person behind the post, authors are often able to reduce rude comments or harassment.

Peer Moderation

At Civil, we’ve taken a new approach: we’ve designed a comment platform from the ground up to mimic the natural social behaviors and consequences you see in face-to-face communities. In order to submit a comment, each person must rate three comments from other people before their own is in turn reviewed. They don’t get to choose which comments they rate, and backend algorithms look for patterns of abuse and keep the voting fair.
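
As a rough illustration of those mechanics, here’s what the core queue might look like in Python. This is a sketch, not Civil’s actual implementation: the three-ratings requirement comes from the description above, but the publishing threshold, the majority rule, and all the names are assumptions made for the example:

    import random
    from dataclasses import dataclass, field

    RATINGS_REQUIRED = 3  # comments each person must rate before submitting

    @dataclass
    class Comment:
        author: str
        text: str
        ratings: list = field(default_factory=list)  # True = rated civil

        @property
        def published(self) -> bool:
            # Assumption for the sketch: publish once three peers have
            # rated the comment and a majority judged it civil.
            return (len(self.ratings) >= RATINGS_REQUIRED
                    and sum(self.ratings) > len(self.ratings) / 2)

    class PeerModerationQueue:
        def __init__(self):
            self.pending = []

        def submit(self, author, text, judge):
            # The commenter doesn't choose what to review: three pending
            # comments by other people are assigned at random.
            assignable = [c for c in self.pending if c.author != author]
            for comment in random.sample(
                    assignable, min(RATINGS_REQUIRED, len(assignable))):
                comment.ratings.append(judge(comment.text))
            new_comment = Comment(author, text)
            self.pending.append(new_comment)
            return new_comment

Here judge stands in for the human reviewer’s civil-or-uncivil call, and the abuse-pattern detection mentioned above would run on top of this queue, not inside it.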

People are willing and able to help. And a surprising effect of the peer moderation approach is that we’re seeing people stop and reflect on what they’re writing in a way they never seem to do on unmoderated platforms. Knowing that their anonymous peers will help decide whether their comment should be published seems to make people pause and think before they post.

Internet civility is a challenge, to be sure, but as more and more of our social interactions move online, it’s crucial that we get this right.

Photo credit: Michael Matti