Recommendations for handling difficult situations

I’m in the process of coming up with some basic recommendations for Discourse owners and moderators on how to make challenging moderation decisions in a sustainable, community-friendly way.

Here’s what I have so far:

  1. Share moderation duties with multiple people. Having a single focal point for all mod decisions is fatiguing and can lead to undue flashpoints and personal attacks. If there’s only one person in charge, is the forum “you”? What if you’re tired, or cranky, or in a bad place? What if you find yourself overly involved in a topic and want to defer moderation to someone else? That’s a dangerous path to be on.

  2. When you must ban, be decisive. Certainly give people another chance if they make a mistake, but then cleanly, decisively ban them. It is not your job to reform this person. You are not obligated to let them try over and over. You can keep the door open by advising them to email you in 6 months or a year if they want to try again. But the onus to recognize the problem, apologize, and ask politely to try again – that is on them.

  3. Discuss bans, but not too much, and only in the right place. Please don’t celebrate bans or constantly bring them up, and definitely don’t clutter unrelated topics with ban conversation. Keep ban discussion in the “forum feedback” category.

  4. Keep heated moderation talk private. If things are getting heated, move that part of the discussion out of the public topic and into a private group message. Take it offline to email or voice if needed. At minimum, move it to the “forum feedback” category so other topics aren’t cluttered with off-topic moderation chatter.

  5. Take a break. Sometimes people just need to walk away for a while and take a break from the discussion. A short timed closure of a contentious topic, or a day suspension, might be in order. Avoid making hasty decisions in the heat of the moment, and discourage others from doing so as well.

35 Likes

On the forum that I manage, we’ve found that being clear about the reasons why we do the things we do makes the community more cooperative. If a moderator is perceived to be arbitrary, it causes problems. Of course, that means we’re constantly iterating on our community guidelines – but not so often that we get bogged down in rules.

When we have to impose sanctions, our default mantra is to set blasters to stun. Serious action like thread closure (our people don’t like to be interrupted) or suspension requires more than one pair of eyes before it goes through.

11 Likes

That. 110%. Over at Stonehearth, any “major” action is discussed by at least two staff members, and typically ends up being discussed by all active staff before taking place. We’ve taken advantage of the updated blocking feature to prevent public posting while we chat quietly with a user. If necessary – and thus far it has been very rare – we’ll go for a suspension. If the offense is clear and indisputable, we’ll block without discussion, but that too is quite rare; the only instances where a single moderator would block without discussion are an inappropriate username and/or profile image. Even when we had a clear spammer posting pornographic videos (which were quickly flagged into oblivion), we didn’t delete the spammer until after a (brief) discussion.

Another focus for us is leading by example. Our staff is active in the community, and many users don’t immediately recognize us as staff – just as friendly users. We try to welcome each new user after their first post with a “hello and welcome” reply and be as friendly as we can. If a user posts in the wrong place, we’ll move it and then post a reply explaining why. 95% of the time, users are appreciative, and this helps foster a friendly and helpful community. As a result, most users will seek us out when they need help, rather than seeing us as some unknown being watching over the forums.

9 Likes

As I reflect on this topic, I think the advice here is still sound. However, there is one thing I’d add:

  6. Consider having a site feedback topic summarizing significant moderator decisions and the rationale behind them. We find that transparency in moderation goes a long way toward making communities more sustainable. Over time, you’ll end up with something of an illustrated rulebook: specific examples of the types of behavior that are not welcome on your site, and the consequences of that behavior. Related discussions can be spun out into other site feedback topics to improve and enhance your community as well. That’s the goal!

21 Likes

I also wanted to capture this thought here, because there’s something uniquely dangerous about it, and it’s definitely on the extreme end of the moderator challenge level.

I found this exchange on Hacker News fascinating.

I think the relevant guideline puts it well:

Comments should get more civil and substantive, not less, as a topic gets more divisive.

The inadequacy of this guideline, and the couching of most moderation along its lines, is why the problem and the ‘dynamics’, as tptacek puts it, exist in the first place.

The site selects for and breeds civil, substantive racists and misogynists (along with the hyper-sensitized responses) like a hospital breeds antibiotic-resistant superbugs.

I can see ‘selects for’, but ‘breeds’ seems a stretch. Unless you mean breeds civility within racists and misogynists, which seems beneficial?

Yes, mostly the second thing. It’s the opposite of beneficial – because the guidelines say ‘don’t be a meanie/obvious blowhard’, and most people who get called out are called out for something along those lines, bigots who adapt to these rules can and sometimes do last on the site for years.

HN’s mods put a great deal of effort in and are surprisingly successful at containing the far more basic and common human impulse to be a jerk to strangers online. They have rules, they enforce them, they publicly shame rulebreakers, etc. You are explicitly not allowed to be an asshat on HN and everyone knows it. The place would be better if ‘don’t be a bigot’ got the same treatment. All-caps users and transgressors against HN’s fundamentalist quotation-marks cult are exposed to more public opprobrium than your typical “human biodiversity” sea lion.

I had never thought about it this way, but he’s right – a racist, misogynist, or bigoted line of argument is far more dangerous when it is draped in the robes of overt civility. So to the extent that we are teaching people …

Hey, it’s OK to say racist / sexist / bigoted things, as long as you say them in a civil, substantive manner

… we are indirectly creating much more powerful racists / sexists / bigots, who will become immune to less sophisticated moderators – moderators who will only see “well, what this person is saying is kind of morally abhorrent, but they aren’t saying it in a mean way…”

13 Likes

Studies where they give classes on empathy, “healthy relating”, and non-violent communication to people on the psychopathy spectrum show that they become more effective… psychopaths.

It’s indeed a challenging aspect of being facilitators and moderators in a community.

12 Likes

It is worth considering a group-led category for moderation discussion. That way, people can opt in to those discussions while the vast majority of users can opt out.

We created a logged-in-users category (restricted to Trust Level 0) so it is easy to move discussions out of reach of web crawlers and other anonymous traffic.
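
For reference, here’s a rough sketch of how such a category can be created via the Discourse API. The endpoint and permission scheme are real, but the URL, API key, and category name below are placeholders, and parameter details can vary by Discourse version – treat this as an illustration, not a recipe:

```python
# Sketch: create a category visible only to logged-in users via the
# Discourse admin API. DISCOURSE_URL and API_KEY are placeholders you
# must supply yourself.
import requests

DISCOURSE_URL = "https://forum.example.com"  # your instance (placeholder)
API_KEY = "your-admin-api-key"               # admin API key (placeholder)
API_USERNAME = "system"

response = requests.post(
    f"{DISCOURSE_URL}/categories.json",
    headers={"Api-Key": API_KEY, "Api-Username": API_USERNAME},
    json={
        "name": "Members Lounge",  # hypothetical category name
        "color": "BF1E2E",
        "text_color": "FFFFFF",
        # Grant the built-in trust_level_0 group full access
        # (1 = Create/Reply/See). With no "everyone" entry, anonymous
        # visitors - including web crawlers - see nothing.
        "permissions": {"trust_level_0": 1},
    },
)
response.raise_for_status()
print("Created category id:", response.json()["category"]["id"])
```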

4 Likes

Some compelling data here supporting the idea of kicking out your most difficult users. These kinds of users are thankfully very rare, so the labor of kicking them out is worth it.

And data on the importance, when moderating content, of leaving signposts about why the bad posts were removed, rather than just making stuff go poof.

The two papers are “Community Interaction and Conflict on the Web” (pdf link) and “Does Transparency in Moderation Really Matter? User Behavior After Content Removal Explanations on Reddit” (pdf link).

16 Likes

This topic is awesome. I’m definitely going to share it with my community manager. :grinning:

Thanks a bunch for creating it!

7 Likes

A detailed description of good-faith vs. bad-faith posting from Boing Boing, which runs the longest-running public Discourse instance in the world… if anyone would know, they would!

“Bad faith” in a moderation context is:

  1. Violating our community guidelines. That’s an easy and obvious one.
  2. Posting something with the specific intent of riling up the community instead of participating. This necessarily requires some way to tell that the user likely does not believe what they are saying.
  3. A consistent pattern of taking the “devil’s advocate” position on multiple topics. We’ve had several bad actors do this – they always took the contrarian position specifically to rile up the community.
  4. Repeatedly feigning personal involvement with topics – improbably, the user is part of every minority, has done everything, and every issue is personal for them. We’ve had bad actors do this as well.
  5. Traditional sealioning – coming into a discussion, demanding that it go only one way and that people expressly refute their position, while dismissing all other views.

The mods point out that all of the above negative behaviors should be responded to with a flag :triangular_flag_on_post: and not the reply button! Replies are encouragement – that’s you saying “yes please, let’s have more of this”.

17 Likes

This article is mostly talking about world-eating “everyone on one website” :earth_africa: platforms, which Discourse isn’t really designed to be… but it does a great job of illustrating the general perils of content moderation that starts (incorrectly, of course) from “all speech is allowed!”

9 Likes

Wow! An excellent and enlightening article on social media and its many problems. Thank you. :+1:

5 Likes