Shadowbans are bad for discourse, and here's why

I’ve read some encouraging posts here on the topic of moderation, such as @codinghorror’s here1, and I wanted to throw my hat in the ring.

Recent disclosures by Twitter1 have reinvigorated debate over the use of shadowban-like moderation tools. Are they good or bad for discourse? With this post I hope to engender some debate among mods and users about the use of shadowbans.

My hypothesis is that transparent moderation fosters a prepared userbase where mods and users trust each other, and shadow moderation creates more trouble than it resolves. I therefore argue that things like the shadowban should never* be used, even for “small” forums like Discourse installations.

About me

I’m the author of Reveddit, a site that shows Redditors their secretly removed content. Reveddit launched in 2018 and since that time I’ve been outspoken on Reddit1, Hacker News1 and more recently Twitter1 about my opposition to the widespread use of shadowban-like tools, which I call Shadow Moderation. In case there is any doubt that some form of shadow moderation is happening on all of the most popular platforms, please also see my talk for Truth and Trust Online 20221, slides 8-9 at 5:50, or my appearance on Using the Whole Whale1. If there is any doubt that the practice is unpopular among users of all walks, please see the reactions of thousands of users in Reveddit’s FAQ1.

Preface

For reference, here are some existing definitions of shadowban-like behavior:

  • “deliberately making someone’s content undiscoverable to everyone except the person who posted it, unbeknownst to the original poster”. - Twitter blog1
  • “A hellbanned user is invisible to all other users, but crucially, not himself. From their perspective, they are participating normally in the community but nobody ever responds to them. They can no longer disrupt the community because they are effectively a ghost.” - Coding Horror1

And here are two definitions from me:

  • Transparent Moderation: When users are notified of actions taken against their content.

  • Shadow Moderation: Anything that conceals moderator actions from the author of the actioned content.

Note that when platforms talk about transparency, they simply mean that they clearly delineate their policies. This is not the same as transparent moderation.

And to be clear, I do support the use of moderation that is transparent. I do not support government intervention in social media to enforce transparency upon platforms, such as is being attempted by PATA1 in the US or the DSA1 in Europe (Platform Accountability and Transparency Act and Digital Services Act, respectively). Therefore, I also accept the legal right for platforms to engage in shadow moderation.

The case for transparent moderation

The case for transparent moderation is best defined as a case against shadow moderation.

Shadow moderation runs counter to a centuries-old principle of due process: the accused has a right to know the case against them and to face their accuser. In the United States that right is enshrined in the Sixth Amendment, but it is also a basic dignity that we expect for ourselves and therefore should extend to others. Simply put, secretive punishments are unjust.

Now you might say, “but these are private platforms!” Indeed they are. Yet in private we still strive to uphold certain ideals, as Greg Lukianoff from FIRE explains here.

John Stuart Mill made the same case in On Liberty where he wrote, “Protection, therefore, against the tyranny of the magistrate is not enough: there needs protection also against the tyranny of the prevailing opinion and feeling.” 1

Now you may say, “We need it to deal with bot spam!” This will only work for a short time. In the long run, bots will discover how to check the visibility of their content whereas users mostly will not. Therefore, transparent moderation will work just as well for bots, and shadow moderation only hurts genuine individuals.

Perhaps you will say, “I’m only using it on a small forum.” And there I would ask, is it really necessary to resort to deception? Getting comfortable with shadow moderation on a small Discourse installation may be a stepping stone towards supporting its use on larger platforms, and the harm there is clear to me. Every side of every issue in every geography on every platform is impacted. Over 50% of Redditors have some comment removed within their last month of usage. Shadow moderation interrupts untold numbers of conversations, perhaps out of a fear of what will happen if hate is “let loose.” But Reveddit has existed for four years, and that has not happened. Instead, what I’ve observed is healthier communities, and I link many examples of that in my talk.

Shadow moderation at scale also empowers your ideological opponent. Groups who you consider to be uncivil are more likely to make uncivil use of shadow moderation. Any benefits you have while in the empowered position to implement such tooling will erode over time. And as long as you make use of shadow moderation, you will not win arguments over its use elsewhere.

To my knowledge, little research has been done to study the effects of shadow moderation. I know of one study that indicates mod workload goes down as transparency goes up1. Yet we do know that open societies thrive as compared to those that are closed off.

Left unchecked, shadow moderation breeds isolated tribal groups who are unprepared for real-world conversations where such protections are not available. That can lead to acts of violence or self-harm, because words can inflict real harm when we fail to accept the limits of what we can control. We can instead choose to respond to hate with counter speech, and encourage userbases to do the same.

Finally, new legislation cannot solve this for us. Any such law would abridge our individual freedoms and have the unintended consequence of either (a) empowering the very entity the Bill of Rights was designed to protect against, or (b) not working, since there is no way to enforce transparency without giving the government access to all code. So you’re either back to square one or worse off than when you started. Support for due process, and therefore civil discourse, is best achieved through cultural support for actual transparent moderation. Anything short of this would be a band-aid covering up a festering wound.

Conclusion

In conclusion, one should not take the position, “he’s doing it, so I have to do it in order to compete on equal footing.” Consider instead that you may both have poor footing. Discourse installations and other forums that do not use shadow moderation have a stronger foundation. Their userbases are prepared, mods and users trust each other, and therefore both feel free to speak openly and honestly. And, given the widespread use of shadow moderation, it is now more important than ever to vocally support transparent moderation.

Appendix

One as-yet-unreported use of shadow moderation on Twitter may be when Tweets are automatically hidden based on mentions of certain links. I discovered this when I shared a link to my new blog for which I had bought a cheap $2/year domain with the TLD .win. After sharing it via Twitter I noticed my tweets were showing up for ME but NOT for others. For example, this tweet1 does not show the reply from me, even when you click the button “Show additional replies”. To see my reply, you have to access the link directly1. Effectively, that reply is not visible to anyone who does not already know me, and the system conceals that fact from me while I am logged in.

* The one circumstance under which I think widespread censorship is useful is when society has forgotten its harms. In that case, it inevitably returns, if only to remind us once again of its deleterious effects.

20 Likes

Very interesting, thank you.

You could easily argue that there is another layer too, search engines like Google!

6 Likes

Very interesting read. However, I wonder if there is even a use for shadowbans in Discourse.

Yes, there is a shadowban plugin but it’s hardly used. It has one (1) GitHub star, and the statistics we have within Communiteq reveal that fewer than 0.5% of Discourse forums are actively using it.

So why is adoption of shadowbanning within the Discourse platform so low? It’s not because shadowbans are bad (which they are); there is another reason. From a technical point of view, shadowbans are very easy to detect on Discourse. Simply visit the forum without logging in, or with a different account, and spot the differences. And that is possible because Discourse is not a social media platform that uses an algorithm to decide what you will see and in what order it is presented. In Discourse, the content you are shown does not depend on what you have done / seen / read or written before. And if there is no algorithm, there is no place to downplay certain content. There is only the option to hide it completely instead of showing it.
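
To make that concrete, here is a minimal sketch of what that logged-out comparison could look like against a stock Discourse install, using the standard `/t/<id>.json` endpoint and `Api-Key`/`Api-Username` headers. The forum URL, topic ID, and credentials are placeholders, and whether a particular shadowban plugin filters the JSON view the same way it filters the rendered page is an assumption you’d want to verify:

```python
# Minimal sketch: compare which posts a topic exposes to an anonymous visitor
# vs. an authenticated API call. FORUM, TOPIC_ID, API_KEY, API_USER are placeholders.
import requests

FORUM = "https://forum.example.com"   # placeholder forum URL
TOPIC_ID = 1234                       # placeholder topic id
API_KEY = "your-api-key"              # placeholder Discourse API key
API_USER = "your-username"            # account whose view you want to compare

def post_ids(headers=None):
    """Return the set of post IDs in the topic's post stream for this viewer."""
    resp = requests.get(f"{FORUM}/t/{TOPIC_ID}.json", headers=headers or {}, timeout=10)
    resp.raise_for_status()
    return set(resp.json()["post_stream"]["stream"])

anonymous_view = post_ids()
authenticated_view = post_ids({"Api-Key": API_KEY, "Api-Username": API_USER})

hidden_from_public = authenticated_view - anonymous_view
print(f"Posts visible to {API_USER} but not to anonymous visitors: {sorted(hidden_from_public)}")
```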

I don’t support all forms of transparent moderation either, because transparent moderation can still be subjective.

Besides my work in the Discourse ecosystem I’m also founder of a company that combats online disinformation and hate speech. During talks, I am often asked for examples of foreign interference in the democratic processes of this country (the Netherlands), where the people asking the question always expect me to tell some story about Russian trolls. But in fact, the main examples of interference in the democratic processes by foreign actors we have seen in this country are social media platforms like Twitter or YouTube banning or shadowbanning Dutch politicians, without any kind of review by a court or other independent body.

And that’s bad.

13 Likes

Hi Richard,

Thank you for weighing in. I’m not entirely convinced that usage is small on Discourse. We have no way to verify installations’ customized code etc. without first building and sharing tools for users to review those systems’ behavior themselves. Even if usage were small, it’s still important to take a public stance because usage elsewhere started small and is now widespread. I’ll have to disagree with your comment here,

So why is adoption of shadowbanning within the Discourse platform so low? It’s not because shadowbans are bad (which they are); there is another reason. From a technical point of view, shadowbans are very easy to detect on Discourse. Simply visit the forum without logging in, or with a different account, and spot the differences.

While that is technically possible, most users won’t think to do it. After all, Reddit’s comment removals work the same way, which you can see by commenting in r/CantSayAnything. 99% of Redditors have no idea it works like this. I had an account on Reddit for seven years before I discovered its widespread use of shadow moderation. I had also visited as a reader for probably three years prior to that. And even now that Reveddit has been out for four years, I would estimate that less than 1% of Redditors understand how comment removals work. Finally, even after building Reveddit it wasn’t until four years later, this summer, that I really understood that shadow moderation tools existed and were widely used on all major platforms.

I agree with your point that manipulation of algorithmic feeds is harder to detect. And as you say, non-algorithmically filtered content is reviewable. However, date-ordered comment sections always fit this category, and I believe every major social media site has them. Those are therefore reviewable, provided someone builds tools to do that, because again users don’t think to check this themselves. Even if you do check a comment’s status once, it’s possible for a moderator to quietly remove your comment an hour later, thereby invalidating your check! This is why I built Reveddit Real-Time1, an extension that polls the system for shadow removals and notifies you when that happens.
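
For anyone curious, the general idea can be sketched in a few lines of Python. This is not Reveddit’s actual code; it simply polls Reddit’s public `.json` view of a comment’s permalink (the permalink and comment ID below are placeholders) and flags the comment if it disappears or shows up as removed:

```python
# Rough sketch of the polling idea: re-check the logged-out view of a comment's
# permalink and warn if it vanishes or is shown as removed.
import time
import requests

PERMALINK = "https://www.reddit.com/r/somesub/comments/abc123/_/def456"  # placeholder
COMMENT_ID = "def456"                                                    # placeholder
HEADERS = {"User-Agent": "shadow-removal-check/0.1"}  # Reddit asks for a descriptive UA

def publicly_visible(permalink, comment_id):
    """True if the comment still appears, unredacted, in the public thread view."""
    data = requests.get(f"{permalink}.json", headers=HEADERS, timeout=10).json()
    # Index 0 is the submission listing; index 1 holds the permalinked comment tree.
    for child in data[1]["data"]["children"]:
        c = child["data"]
        if c.get("id") == comment_id:
            return c.get("body") not in ("[removed]", "[deleted]")
    return False  # comment no longer present at all

while True:
    if not publicly_visible(PERMALINK, COMMENT_ID):
        print("Comment is no longer publicly visible -- possible shadow removal.")
        break
    time.sleep(300)  # poll every five minutes
```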

Such tools should be built for all platforms until there is sufficient trust between users and social media sites that shadow moderation is in the rearview mirror. Shadow moderation is so widespread that I think everyone should assume it’s being used by whatever system they’re using to communicate, unless the system expressly denies the practice. And even in that case you need to watch their wording, such as Twitter’s choice to publicly use the word “rank”1 rather than internal descriptors like “de-amplify”1. YouTube, on the other hand, more openly acknowledges it will “reduce”1 content.

transparent moderation can still be subjective.

Can you expand on this?

Besides my work in the Discourse ecosystem I’m also founder of a company that combats online disinformation and hate speech.

Interesting, can you say which one? And are the talks you mention available online? I agree that YouTube’s bans are also concerning. I suspect their moat, like others’, was built using shadow moderation tools that “reduce” or “shadow remove”. We should therefore focus on that first.

I think the real censorship of our era is shadow moderation. Transparent moderation can be censorship if it is not “viewpoint neutral”, but it is more often akin to editing. A final case against real censorship comes from Bob Corn-Revere, a leading First Amendment lawyer: “the censor never has the moral high ground.”1

3 Likes

I agree that moderation should be transparent, but I can see how shadow banning would be tempting for dealing with trolls who won’t go away. What are some alternative ways of dealing with someone who keeps creating new accounts after having been banned by a site’s staff?

8 Likes

As one of the founders of the second largest Discourse managed hosting provider, I do have a way to verify that - at least with respect to the instances we host - so I am in the lucky position to be able to perform some statistically significant calculations here. And if there were widespread shadowban functionality that - in some magical way - was completely disjoint from “our” instances, I would somehow not have heard about it at all in the past 9 years. That’s, well, highly unlikely.

Your post seems to blur the distinction between shadowbanning (where all posts from certain users are suppressed) and shadow moderation (concerning individual posts), but I think that distinction is important.

Apart from the technical differences where shadowbanning is much easier to detect (if you can find a single post from a certain user, they are not shadowbanned), shadowbanning is much more evil, since someone’s freedom of speech is being restricted based on the things they have said before, whereas shadow moderation suppresses their current speech.

I think the main issue with shadow moderation is that it is subjective, not that it is silent. When people are notified about content removal, this can also lead to unwanted behavior (for instance reposting or trolling). An example is what happened on LinkedIn during Covid-19. They notify about content deletion most of the time, so people started taking photos of their removed posts and reposting them. This not only drew more attention to their posts (which stayed under the radar because LinkedIn apparently can’t do image-to-text on those photos), it also fueled the “great reset” conspiracy theories that there was a larger power restricting their freedom of speech.

Well, this:

5 Likes

Hi Simon, context is very important. Do you have any specific examples? In general I would suggest imagining that this burden need not fall entirely on content managers’ shoulders. Engage your community and explain the dilemma. Maybe it can be resolved without a ban. If you think that’s naive then I would say more context is needed to discuss.

2 Likes

Richard, thank you for these clarifications. The data point you provide is good to know. I think we can agree that shadow moderation does not appear to be a big deal on Discourse today.

I personally will still not declare any “all clear” here, for reasons already stated and because even a single forum’s usage of it can have a big impact. For example, certain groups on Reddit make more heavy use of shadow moderation than others. Users can be lulled into trusting the system as a whole on the basis that certain groups do notify them of removals, so they assume notification happens everywhere. And what often happens next is that the more manipulative groups are trusted more, simply because they don’t tell users when they remove content. Over time, this has led to more groups defaulting to removal without notification. Reveddit helps push back on this somewhat, but it is still just a band-aid without broader awareness.

It’s not hard for me to imagine a Discourse installation that does not exercise transparent moderation and thus takes advantage of users in the same way. I agree that widespread use of shadow moderation elsewhere does not mean its use is widespread on Discourse. I consider the published plugin and requests for bug fixes to be evidence that shadow moderation is at least knocking on Discourse’s door.

Apart from the technical differences where shadowbanning is much easier to detect (if you can find a single post from a certain user, they are not shadowbanned),

From a user’s perspective, this test does not work. Systems that selectively shadow moderate content do not advertise that they do so. Also, “easier” doesn’t mean “easy”. This user1 was shadowbanned sitewide from Reddit for three years before they discovered it, and many other users are regularly shadowbanned from subreddits, for example as documented in r/LibertarianUncensored1.
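
For completeness, the account-level case is the one thing a trivial check can catch: a sitewide-shadowbanned Reddit account’s profile returns a 404 to logged-out visitors while looking normal to its owner. Here is a small sketch of that check (the username is a placeholder). Note that it says nothing about per-comment or per-subreddit removals, which is exactly the gap described above:

```python
# Sketch of the account-level check only: a sitewide-shadowbanned Reddit
# account's profile 404s for logged-out visitors while still looking normal
# to its owner. USERNAME is a placeholder.
import requests

USERNAME = "some_username"  # placeholder
resp = requests.get(
    f"https://www.reddit.com/user/{USERNAME}/about.json",
    headers={"User-Agent": "shadowban-check/0.1"},
    timeout=10,
)
if resp.status_code == 404:
    print(f"u/{USERNAME} appears sitewide shadowbanned (profile hidden from the public).")
elif resp.ok:
    print(f"u/{USERNAME} has a publicly visible profile; comment-level removals may still apply.")
else:
    print(f"Inconclusive (HTTP {resp.status_code}); rate limiting can interfere with this check.")
```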

Reddit now also has something called Crowd Control that allows subreddit moderators to flip a switch to auto-remove all commentary from non-regular subscribers without notification. In a single r/conservative thread1, 4,000 comments (over 50% of the thread) were automatically shadow removed. None of these users have any idea that their participation in that group is often muted.

shadowbanning is much more evil, since someone’s freedom of speech is being restricted based on the things they have said before, whereas shadow moderation suppresses their current speech.

Shadowbanning [of a whole account’s content] is a form of shadow moderation. The term “selective invisibility”1 may better describe what you meant by “shadow moderation”. Both, in my opinion, can be equally harmful due to the lulling effect described above, and because selective invisibility can still be applied without human review of the moderated content. On Reddit, that constitutes most comment removals; mods do not go through these and approve them, and I imagine the same is true on other platforms.

Finally, I agree that people may debate what constitutes censorship, and even what it means to be completely transparent about moderation. I think the more salient point here is that we are so far from transparent moderation on social media, and so far from public awareness of the pervasiveness of shadow moderation, that any step in that direction is significant. In other words, don’t let perfection stand in the way of what’s good.

5 Likes

Hi there Rob, I agree with @simon that there are certain instances where shadowbanning could be useful on a purely functional basis. I personally don’t want to get into the deeper moral and philosophical issues of this topic, but I have seen multiple cases that are quite clear cut. It usually involves a specific person the entire forum knows as a blatant troll: someone who has been banned on multiple occasions, keeps coming back under different usernames, and invariably uses the Tor network. The same goes for disgruntled users who post furiously profane tirades, awful insults, and even disturbing obscene images or specific targeted physical threats. So the sort of cases that I’m referring to are clear and unequivocal examples of users who absolutely should not be on a forum (or even free on the streets in some cases), and whose online behavior violates the norms of practically all decent people in the world. Shadowbans can at least buy some time before that sort of user realizes it and creates another account.

6 Likes

That’s a tricky one. I’m not in a position to give examples, but the subheading on the Discourse Shadowban topic sums it up pretty well: “The last resort for dealing with trolls that just won’t go away.” Also this:

My approach to the situation would be to add a note to the user’s account with Discourse User Notes to give other mods on the site a warning about the user and make sure they are quickly banned for any new infractions. I’ve never come across a situation where the user didn’t eventually go away.

I think we should be doing our bit to try and make the world a better place. The type of issue I’m describing here is dealing with users who seem (from my point of view) to be dealing with mental health issues. I’d like to know how to help them be better citizens of both the internet and the real world.

6 Likes

You could easily argue that there is another layer too, search engines like Google!

Right. I’m more concerned about shadow removals in date-ordered comment sections. There, the harm is clear and it is relatively easy to build tools that show where it’s happening. The trick, I think, is growing support for that movement. It doesn’t exist yet. People don’t even know it’s happening. And, to the extent that people still say “we need such tools for this worst case scenario”, I think that is reductive because the harms of this new form of shadow censorship get sidelined or are never considered. Still, it may be a conversation that needs to happen if we want to raise support for having transparently moderated conversations.

You’re absolutely right to say that opaque algorithms can also shadow moderate. I’ve heard rumors of research arguing that Google manipulates what comes up in its search suggestions as you type; however, I don’t have a source on that, so it remains a rumor for me. One could also intentionally bias training data, or fail to notice existing bias in the data. Or, you could argue that what constitutes bias is subjective, since the digital world is never going to precisely reflect the real world.

Those are all more involved discussions, many of which will only occur behind closed doors. I think they would be more likely to occur if there were broader awareness of the prevalence of shadow moderation in comment sections, and thus broader support for not doing that; that is, support for transparent moderation. I’m not married to these terms, by the way, so if you have better ideas please shout them out.

1 Like

Hi Rahim,

Glad to have you join the discussion.

I personally don’t want to get into the deeper moral and philosophical issues of this topic, but I have seen multiple cases that are quite clear cut.

Can you cite some of these examples? Without them this is indeed more of a theoretical discussion than an empirical one. Note that some content is illegal1. Such content should be reported to authorities. I assume you are talking about constitutionally protected speech that otherwise causes harm to users.

I’ve linked several cases demonstrating how ongoing use of shadow moderation is harmful and have yet to see an example of a troll who (a) is constantly posting borderline constitutionally protected speech, and (b) won’t go away after being told. Perhaps that is because it is rare.

The hypothetical worst-case scenario is always used to justify consolidation of power. On Reddit, some groups like r/news have one moderator for every one million subscribers who have accounts. The number of readers likely stretches that ratio by at least a factor of ten. That’s remarkable when you consider that the moderators there are anonymous, untrained volunteers. They decide what hundreds of millions of people will see in the comment sections without oversight from even the authors of those comments.

It usually involves a specific person the entire forum knows as a blatant troll: someone who has been banned on multiple occasions, keeps coming back under different usernames, and invariably uses the Tor network. The same goes for disgruntled users who post furiously profane tirades, awful insults, and even disturbing obscene images or specific targeted physical threats.

Does this justify implementing a form of censorship that the userbase is not entrusted to know about?

Here is one case where I engaged with someone itching for a fight. You can see from BillHicksScream’s comment history1 that they go around saying provocative things all the time. And yet, they had nothing to say in response to my last comment.

In other cases, giving unruly users the last word may be the best course. You only need to appear reasonable in the face of dogma, as Jonathan Rauch says1. It is not necessary to convince trolls/extremists that they are wrong. They will tire themselves out.

Shadowbans can at least buy some time before that sort of user realizes it and creates another account.

And then what? What happens when users inevitably put two and two together and realize their discussions are being manipulated because the power grab was too big and too tempting for any individual to resist? Holdouts who have not built shadow moderation into the software, like Discourse co-founder Jeff Atwood, are the exception, not the rule.

There is a cultural shift1 happening in tech right now. Whole teams are being laid off and users are discovering shadow moderation. Honest tech is the way forward. Check out how people react when they discover what’s going on1,

…what is the supposed rationale for making you think a removed post is still live and visible?

…So the mods delete comments, but have them still visible to the writer. How sinister.

…what’s stunning is you get no notification and to you the comment still looks up. Which means mods can set whatever narrative they want without answering to anyone.

…Wow. Can they remove your comment and it still shows up on your side as if it wasn’t removed?

…Wow. I’ve been on here 12 years and always thought it was a cool place where people could openly share ideas. Turns out it’s more censored than China. Being removed in 3,2,1…

So, it backfires. In the minds of users, you are now the censor who says, “users shouldn’t see this; only I [the moderator] have the constitution to endure it.”

Censors, by the way, never refer to themselves as censors, as I pointed out here1. That’s because the word “censor” has been regarded as a bad thing since the days of Anthony Comstock. So it’s often reworded via terms like selective invisibility1, visibility filtering1, ranking1, visible to self1, reducing1, deboosting1, or disguising a gag1. Those are all phrases that platforms and advocates of shadow moderation use in order to avoid saying the bad word, “censorship”. The list is endless because the point is to mask, not reveal, what they are doing.

You say you don’t want a philosophical discussion, yet when dealing with someone with whom you disagree while respecting individual freedoms, it may help to think like a scientist rather than like a lawyer or soldier and be curious about where that person is coming from. Greg Lukianoff calls this “The project of human knowledge” [video clip] [article].

In short, harms brought on by censorship should be factored in. A myopic view of harms caused by trolls without any consideration for the harms of today’s real censorship does not advance conversations about content moderation because you’re only looking at part of the picture. Shadow moderation remains hidden from virtually 100% of the public.

Given the fact that so many users are prevented from helping to mediate discussions when shadow moderation is happening, it’s reasonable to engage them in assistance. To the extent that no attempt is made to involve the community through transparent moderation and counter speech, I’d call that online authoritarianism: a godlike mentality where moderators pretend to know better than everyone else. And ultimately, this will give administrators more trouble than it’s worth, as I think you can see playing out with the periodic turmoil faced by Twitter, Facebook, and Reddit. Users in those spaces have been there long enough to begin to discover the manipulation, and they want something better.

2 Likes

My approach to the situation would be to add a note to the user’s account with Discourse User Notes to give other mods on the site a warning about the user and make sure they are quickly banned for any new infractions. I’ve never come across a situation where the user didn’t eventually go away.

That’s been my experience too: transparent moderation helps people understand the rules and decide whether they want to follow them or converse elsewhere. The study I cited regarding mod workload also bears this out.

I think we should be doing our bit to try and make the world a better place. The type of issue I’m describing here is dealing with users who seem (from my point of view) to be dealing with mental health issues. I’d like to know how to help them be better citizens of both the internet and the real world.

It sounds like you’re coming from a good place, Simon! I think the individual who “just won’t go away” after being asked to leave is very rare, but if it does come up I’d say they deserve the same respect we give everyone else in regards to a “right to a trial” of sorts. A full-on trial is not going to happen for every online infraction, so the compromise I propose is that all users deserve to know when they’ve been moderated. Taking that away is like taking away the judiciary. Regarding mental health, even psychiatrists won’t make such assessments without personally examining the individual, so I don’t think we should be doing it either. “They’re crazy” will inevitably become the excuse for taking away the right to trial by using shadow moderation.

If the content is illegal, something that “in context, directly causes specific imminent serious harm”1 (a definition from Nadine Strossen), then that should be reported to authorities.

Finally, if you think self harm is a factor, then you could recommend a crisis hotline. Theoretically speaking, it’s possible that shadow removing comments from users advocating violence towards themselves or others may make them more likely to act on their wrong-headed beliefs.

Consider a group that you consider to be hateful. They might still remove calls for actual violence. But if mods do this secretly, the violent individual is not given any signal that their views were not in accord with the community. Who can say whether that makes them more or less likely to act? They may perceive the lack of a counter response as tacit approval.

My argument is the same as Justice Brandeis’, that sunlight is the best disinfectant1 and that a solution requires involving the community. Rather than shadow removing comments, even hateful ones, you can give them a signal about removals. That signal becomes a chance for them to benefit from some other interaction, be it one from your community, another community, or law enforcement. In doing so, you may provide the fringes of society a chance to feel less isolated, thereby making yourself and the world a safer place.

In general, I try to follow Jonathan Rauch’s advice. He says,

“The person you’re talking to isn’t the person you’re directly talking to…”

Basically, you don’t need to convince everyone you’re right. It’s enough to sound reasonable in the face of someone else who’s being dogmatic or even censorious. While it’s true that someone can legally behave censoriously in conversation by attempting to shout you down with hateful words, they also look like an idiot provided you give them the proper latitude. Any perception that this doesn’t work online may very well be because of widespread use of shadow removals. When you attempt to engage a “hateful” group in their own space, you may be shadow moderated without your knowledge!

In the same talk, Rauch says, “Haters, in the end, bury themselves if you let them talk.” And, he would know what works. As a gay man who grew up in the 60s, described here1, Rauch has apparently been active in the gay rights movement for decades with arguably a good deal of success. So he’s worth listening to on the subject of free speech, no matter your feelings about homosexuality or gay marriage. From the Spanish Inquisition, to Anthony Comstock, to Communism and McCarthyism, to the PMRC from Tipper Gore, the current social media era isn’t the first time free speech has been challenged, and it won’t be the last.

I’ve found Rauch’s mindset to be extremely effective in moderating my own engagement and letting others speak their mind while still getting my point across. I continually discover new ways to accept that (a) I can only change myself, and (b) that this is enough to have a positive impact in the world.

4 Likes

Possibly there are two separate issues here. The first issue is moderators doing their best to deal with problematic behaviour. Shadow banning obnoxious users was suggested as an approach for that here: Discourse Shadowban. I can understand the motivation, but I don’t think it will gain much traction as a moderation technique - if a user is that persistent, it’s just delaying an inevitable conflict.

The second issue is what’s been happening on other platforms, where the reach of some users’ posts is limited without giving those users any indication of what’s going on. What’s notable is that these users have not been in violation of the service’s TOS. It would be technically possible to create a Discourse plugin that would implement something like this, but unless the site was extremely active, I think it would be quickly discovered. I actually think that Discourse and other decentralized community platforms are the antidote to this issue. As an extreme example, if someone wants to create a community where saying “the earth revolves around the sun” is considered to be dangerous misinformation, they are free to do that. People with a heliocentric world view can just find a different community to join.

A further issue would be if it were impossible to find hosting or a payment processor for a community that was going against the dominant narrative on a particular issue. Other than trying to promote the values of intellectual curiosity and diversity, I don’t have any great suggestions for how to deal with this.

Maybe, but also, coming up with solutions for real problems is a good business strategy. How to help people with their troll-type behaviour, as opposed to just banning them, feels like an unsolved problem. I can imagine Discourse providing some tools to help communities with this. It could also be outsourced to an external service that was staffed by trained therapists.

5 Likes

Simon, I appreciate you taking the time to reply. I can see you are not persuaded by my argument that trolls can sometimes be left alone (that is, handled via transparent moderation as opposed to shadow moderation), nor do you appear persuaded that shadow moderation may see increased usage on Discourse. Maybe that’s true, maybe not. I think time will tell, but not necessarily soon. I strongly disagree that shadow moderation gets “quickly discovered”. I do agree that promoting intellectual curiosity and diversity is not to be undersold, and that good business strategy plays a fundamental role.

2 Likes

Cross-linking the official Discourse position here

12 Likes

Maybe I’ve been too cautious with expressing my thoughts here:

I think shadow banning is a terrible idea. That said, I have lots of concerns about the future of the internet, but as far as Discourse goes, I’m not concerned that shadow banning or the surreptitious manipulation of a site’s content is something that will catch on. First, as noted above, the functionality doesn’t exist in the core Discourse code and the team have no plans for developing plugins for it. Secondly, I can’t see a valid use case for adding this kind of functionality to a Discourse site - I can’t imagine a case where implementing it would benefit a site’s owners.

If there’s one other thing I could promote, it would be the value of disagreement. Promoting disagreement, as opposed to agreement, is the quickest way to pull in all available view points from a group of people.

5 Likes

I know there are situations where a shadowban plugin / feature is needed.

We have a paid program and 99% of our students love it. Then we get some bad apples who go into the forums and “poison” the others with negativity. These people have all paid for something, so when a bad apple spreads FUD and new people come in and see it, they ask for a refund.

The bad apple also paid, so we don’t want him to refund either, because it’s usually these types of folks who suck up most of our time.

My staff spends hours and hours dealing with the bad apples and preventing them from poisoning the good apples.

So yes… we just want to shadowban the bad apples so the support team can respond to their posts and replies, they think they are “in”, life is merry and happy for our support team, and our awesome users don’t get poisoned.

Before you say “Just refund the bad apples”… no that’s not the answer.

We need a shadowban feature for topics and replies, plus a per-user option if we permanently shadowban someone.

There is a legit use case for this feature.

2 Likes

Out of curiosity, why would removing the undesirable members from your community not be the answer in this situation?

Wouldn’t it also be more of a problem if/when they figured out they were paying for something that they weren’t fully getting? (i.e. participation in your community)

5 Likes

They would refund. They get a lot more than just the community though. The forums are just a small part. Software, tools, education. What they don’t get to do is FUD others in the community and cause others to doubt or want to refund. After doing this for 20 years, we know that a certain percentage of people are going to be negative about everything in life, so we do need to protect our paying customers who are taking action.

If it were just a forum they were paying for, that would be different… but this is a 2K product, so we have to protect our paying members from the negative nellies.

2 Likes