When a large company rolls out social media capabilities on its website, it frequently worries about negative posts. To many, a large company represents a large target. Even the best companies with excellent customer service will have the occasional frustrated, angry customer.
How should a company deal with this? If it moderates posts and filters out anything negative, it will lose its customers’ trust. If it does nothing and lets negative posts pile up, those posts could come to dominate the online community and its discussions, even if they are not truly representative of most people’s experiences.
It’s not so much whether a post is negative or positive, but whether it is constructive or destructive. A constructive post might point out a flaw in a product, but then lead to a discussion about how to improve the product or work around the flaw. A destructive post might point out the same flaw and then proceed to personally attack the employees of the company.
Luckily, communities can police themselves. Using comment moderation features, users of online communities and websites can rate comments up or down, or report them for violation of community guidelines (such as inappropriate language).
When the right tools are in place, the constructive members of the community (who generally represent the majority) will tend to vote down destructive contributions. The company that sponsors the community or social media aspects of the site won’t need to be involved in moderation, and so it won’t be perceived as trying to control the conversation.
Pete Hwang, Experience Designer-Strategist at Hewlett-Packard, recently brought to my attention an excellent Wired magazine article by Clive Thompson on “how to open your website to comments without inviting the flood of toxic and inappropriate comments & flamewars that often arise”.
The world’s top discussion moderators have developed successful tools for keeping online miscreants from disrupting conversation. All are rooted in one psychological insight: If you simply ban trolls—kicking them off your board—you nurture their curdled sense of being an oppressed truth-speaker. Instead, the moderators rely on making the comments less prominent.
Pete’s favorite approach to disarming those destructive comments:
Here’s another hack: selective invisibility. It was invented by Disqus, a company whose discussion software handles the threads at 90,000 blogs worldwide (including mine). In this paradigm, if a comment gets a lot of negative ratings, it goes invisible. No one can see it—except, crucially, the person who posted it. “So the troll just thinks that everyone has learned to ignore him, and he gets discouraged and goes away,” chuckles Disqus cofounder Daniel Ha.
(It turns out that selective invisibility is a technique that actually dates to Bulletin Board Systems (BBSs) back in the early 1980s.)
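For the curious, here is a minimal sketch of how the mechanic might be implemented. The data model, names, and vote threshold below are my own illustrative assumptions, not Disqus’s actual code; the core idea is simply to filter each viewer’s comment list so that heavily downvoted comments stay visible only to their own authors.

# A minimal sketch of selective invisibility. The model, names, and
# threshold are illustrative assumptions, not Disqus's actual code.
from dataclasses import dataclass

HIDE_THRESHOLD = -5  # hypothetical: hide a comment once net votes fall this low

@dataclass
class Comment:
    author_id: str
    text: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def net_score(self) -> int:
        return self.upvotes - self.downvotes

def visible_comments(comments: list[Comment], viewer_id: str) -> list[Comment]:
    """Return the comments a given viewer should see.

    Heavily downvoted comments are hidden from everyone except their
    own author, so the poster still sees the comment and concludes
    that the community is simply ignoring it.
    """
    return [
        c for c in comments
        if c.net_score > HIDE_THRESHOLD or c.author_id == viewer_id
    ]

# Example: the troll's comment is invisible to everyone but the troll.
thread = [
    Comment("alice", "Good point about the flaw; here's a workaround.", upvotes=4),
    Comment("troll42", "You people are all idiots.", downvotes=9),
]
print(len(visible_comments(thread, "alice")))    # prints 1
print(len(visible_comments(thread, "troll42")))  # prints 2

The key design choice is that the hiding decision depends on who is looking: the same thread renders differently for the troll than for everyone else, which is exactly what discourages the troll into leaving.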
It’s flattering to be referenced here, Will. I always think of you as the guru on these topics and myself as the pupil.
BTW, do you know if there’s any readily available code out there to easily implement “selective invisibility”?