Dealing with highly contentious discussions

There’s excellent moderation advice in this article, which is ostensibly about moderating challenging offline political discussions …

… but I think this advice can also apply to online discussions:

  1. A topic will be set for each meeting. It is permissible to post well-researched and sourced articles about the topic before and after the meeting only if you will be attending the meeting in person. The goal of this group is to take the conversations offline and to be neighborly. Therefore, articles should mostly serve as suggested reading material. If comments get out of hand I will remove the post.

  2. We should avoid labels as much as possible, such as “Democrats think this” and “Republicans do that.” We can talk about party platforms, but not assume that a neighbor’s ideas and opinions are defined solely by the political party or religion they align with.

  3. Participants’ goals should NOT be to try to influence or change their neighbor’s mind. The goal is to listen to understand and to speak to be understood. If we walk away from each meeting understanding the perspective of someone who does not share our ideas and opinions, that is a win!

  4. At the meeting, a person is not allowed to dominate the conversation. If you need to make a point that is going to take some time to explain, ask for 5 min. You will have 5 minutes to make your point with no interruptions. No one should speak for more than 5 minutes at a time.

  5. The focus will be on issues and policy — not on character traits of a politician. We may bring up how a politician spoke about or voted on an issue or policy, but not on anything unrelated to the issue or policy.

Some of this echoes what I have been thinking when looking at the most challenging, tumultuous discussions in Discourse. I want to elaborate on a few things in a reply and get your thoughts on this too.


The experiment described in that Medium post is quite interesting and well worth trying. I’m very curious how it works out.

I’d say the moderation advice provided is a good starter, but managing the actual situation in situ is another thing. Even professional facilitators can reach their limits, as this little documentary illustrates:

I’d say among those five base rules, the most important one is this one:

In other words: make empathy the goal. And I’d say that the face-to-face situation tends to support that goal in a natural way because of the multimodality (richness) of the communication and the availability of multiple synchronous feedback channels. So my question would be: how do you do that online?


We picked the wrong company name… it should have been The Civilized Empathy Construction Kit Inc…


This one is interesting because it’s the most “offline” of the advice. No referring web links as citations? But that’s what you do in web discussions! Reframing this as tell us your personal story or tell the story in your own words is an interesting direction, and actively steers people away from “generic talking points” they would get from a commonly cited article or source. That could be helpful advice… if perhaps hard to avoid (and do you want to avoid citations?) for online discussions.

Excellent advice both online and offline. Labels don’t help. In fact, I think the presence of common pejorative labels like “SJW” can be fodder for auto-flagging as the discussion has already started to degrade. It’s one of the reasons I want to get to that more flexible blacklisting system, where the presence of certain words can trigger a variety of actions, maybe even a webhook, for Discourse 1.9 @neil.
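To make the idea concrete, here is a minimal sketch of what such a flexible word-watching system might look like: a table mapping word patterns to actions, checked against each new post. The patterns, action names, and structure here are purely illustrative placeholders, not Discourse’s actual implementation or API.

```python
import re

# Hypothetical rule table: each pattern maps to the action to take when it
# appears in a new post. Entries are illustrative examples only.
WATCHED_WORDS = {
    r"\bsjw\b": "flag",      # auto-flag the post for moderator review
    r"\bbadword\b": "block", # reject the post outright
}

def check_post(raw: str) -> list[str]:
    """Return the list of actions triggered by a post's raw text."""
    actions = []
    for pattern, action in WATCHED_WORDS.items():
        if re.search(pattern, raw, re.IGNORECASE):
            actions.append(action)
    return actions
```

An action could just as easily be "notify a webhook" as "flag", which is what makes the approach flexible: the matching stays simple while the responses grow over time.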

Focus on actions, what people do, rather than what they look like, remains excellent advice across the board.

I have been thinking this for a while. Nobody’s ever going to change their mind immediately on anything. Tell me, when was the last time you actually changed your mind on something you felt strongly about? What was that process like, how did it work? Really think through how it happened, and why.

For me, changing my mind is a process of hearing a lot of perspectives and data over time, and a slow shift to a tipping point. It’s never, ever been a case of “I read X and immediately realized how wrong I was!”

So perhaps this advice is quite apt: forget changing people’s minds, just tell your story. A great, heartfelt, honest story can work towards changing someone’s mind in much more powerful ways than … a literal attempt to get them to change their mind.

We have an education panel for this in Discourse, if you haven’t seen it yet:

This definitely covers the common “replying too much and dominating the conversation” pattern.

Unfortunately, there is a slightly less common way to dominate the conversation that we don’t yet account for: disproportionately long replies. What I mean is that for every reply, the other person writes thousands of words in response, consistently, time after time. There’s something … wrong with this pattern, and you can see it in this discussion I had on my blog.

It is a kind of filibustering, where every reply is met with a torrent of words, nothing terrible about any of those words individually, but considered as a whole they are not-so-accidentally smothering you with … verbosity. Crowding you out, not with logic or reason or even a good story, but with sheer volume. Eventually they wear you out (and down), because they’re willing to invest so much effort in these disproportionate replies.

But, unless they’re copying and pasting, it’s certainly work that requires an ample amount of free time.
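A rough way to detect this pattern, as a sketch only: compare the average reply length of the two participants across an exchange. The function and thresholds below are hypothetical, just to illustrate the signal being described.

```python
def reply_length_ratio(word_counts_a, word_counts_b):
    """Ratio of participant B's average reply length to participant A's.

    A consistently high ratio sustained over a whole discussion is a rough
    signal of the "filibuster by verbosity" pattern described above.
    """
    avg_a = sum(word_counts_a) / len(word_counts_a)
    avg_b = sum(word_counts_b) / len(word_counts_b)
    return avg_b / avg_a
```

If one participant averages 150 words per reply and the other averages 1,500, that 10x ratio, held up over many exchanges, is the smell; any single long reply proves nothing.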

I view this as mostly the same as point #2; labels are shortcuts to assigning generic “character traits” to people, when they are wildly diverse. As before, focus on actions, what people actually do.


This is a huge challenge in face-to-face meetings because you have to cut people off. Filibustering is also more severe in face-to-face because it is directly stealing other people’s airtime.

This is not the case online. Long replies don’t prevent others from replying, and nobody is forced to even read long replies (though it might be considered more polite). But long replies can certainly be annoying. So here is where both the challenge and the opportunity lie in online discussions: teach and encourage people to be brief, and to put themselves in the position of the casual reader.

Apart from another education message in the editor, this could be done by making the “Hide details” feature more prominent and teaching people how and why to use it (e.g. in the bot’s advanced user tutorial)…


You reminded me that Disqus will suppress very long posts by default, which looks like this, with a “see more” expansion at the bottom of the post.

I checked around on a bunch of Disqus-enabled sites, and posts have to be fairly long to trigger this length suppression. It also doesn’t come up very often in the “average” web page comment scenario from what I can see. For example, here’s a contentious discussion about hate speech on the Disqus blog itself with 267 comments total, and I see … 4 very long post suppressions? 4 out of 267 is about 1.5%.

One plausible solution is to suppress long posts by new users, or users below a certain trust level, in exactly this click-to-see-the-rest manner. We certainly would not want to do this globally, since Discourse is generally about encouraging longer discussions, not suppressing them!

(There are of course max length limits on posts in Discourse but they are pretty generous!)
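The proposed gating could be sketched as a single predicate: collapse a post behind a “see more” link only when the author is below a trust-level threshold and the post exceeds a length threshold. Both numbers below are made-up placeholders, not real Discourse settings.

```python
# Illustrative thresholds only; neither value is a real site setting.
COLLAPSE_CHARS = 2000    # posts longer than this are candidates
COLLAPSE_MAX_TRUST = 1   # only collapse posts by TL0/TL1 users

def should_collapse(post_chars: int, trust_level: int) -> bool:
    """True if a post should render collapsed behind a 'see more' link."""
    return trust_level <= COLLAPSE_MAX_TRUST and post_chars > COLLAPSE_CHARS
```

The trust-level gate is what keeps this from punishing established members who legitimately write long, substantive posts.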

They push everything else down the page, and having to read through thousands of words of … stuff, even if you are just scrolling, has a real mental cost. That said, it absolutely takes effort to create multiple long replies, and a lot of effort at that. So it isn’t nearly as common as “lots of replies”.

I still want to come back to this @sam and @eviltrout: for fast-moving discussions, limit how often the most talkative people are posting so everyone has a chance – and potentially also limit posting by brand new users under the same conditions.
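One common way to implement that kind of per-user cap is a sliding-window rate limiter. The sketch below is a hypothetical illustration of the idea, not Discourse code; the class name, limits, and window size are all invented for the example.

```python
from collections import deque
import time

class TopicRateLimiter:
    """Hypothetical sliding-window limiter for one fast-moving topic: cap
    how many posts a single user may make within a time window, so the
    most talkative participants can't crowd everyone else out."""

    def __init__(self, max_posts=3, window_seconds=600.0):
        self.max_posts = max_posts
        self.window = window_seconds
        self.history = {}  # user -> deque of post timestamps

    def allow(self, user, now=None):
        """Record an attempted post; return True if it is permitted."""
        now = time.monotonic() if now is None else now
        posts = self.history.setdefault(user, deque())
        # Drop timestamps that have aged out of the window.
        while posts and now - posts[0] > self.window:
            posts.popleft()
        if len(posts) >= self.max_posts:
            return False
        posts.append(now)
        return True
```

The same limiter could apply stricter numbers to brand new users, matching the second half of the suggestion above.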


It would be great if this could tie into suspect/fingerprinted users to prevent sockpuppetry.

Note that the Discourse “slow mode” feature, although manually invoked, does address this specific part. I’m still noodling on ways that slow mode could become automatically triggered in certain scenarios, for the safety of the community.
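Since the auto-trigger is still an open question, here is one speculative heuristic, purely as a sketch: enable slow mode when a topic receives a burst of posts in a short window. The function name and thresholds are invented for illustration, not a shipped Discourse feature.

```python
def should_enable_slow_mode(post_timestamps, now, burst=20, window=600):
    """Speculative heuristic: True if the topic saw at least `burst`
    posts in the last `window` seconds, suggesting a pile-on where
    slow mode might protect the discussion."""
    recent = [t for t in post_timestamps if now - t <= window]
    return len(recent) >= burst
```

A real version would likely also weigh flags and how many distinct users are involved, since raw post volume alone can just mean a healthy, lively topic.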