Shadowbans are bad for discourse, and here's why

In English, anyone can censor, whether they are a private or public actor. We even say people can “self-censor.”

I argue here that the true censorship of the digital age is shadow moderation. Feel free to comment over there or here if you agree or disagree.

Yes, and no.

  • These platforms have been under heavy scrutiny for not burying certain posts.
  • OTOH, I personally am not happy that a foreign company or person is able to influence our democracy by removing posts according to their own rules.

So I guess this would be a good middle ground. Better than any of the above.

I think we are in agreement. I support platforms’ right to remove what they want, even posts from users complaining about censored content. However, they should inform authors of the removed posts.

Yes, at the national level, the shared risk is obvious. But it’s also apparent at the local level. Imagine your homeowners’ association has a chat group, and you write a comment critical of its leadership or its dues. If the leader of that group can secretly axe your comment, they can then run a campaign against you. And their criticism of you will appear to precede your criticism of them, making you seem like the child. Does that make sense?

Point being, you don’t want this style of content moderation anywhere: not in community groups, not in school groups, and especially not on global platforms. Yet that is our present reality. And despite widespread concerns about social media, I have yet to convince any established civil rights or free speech organization to speak publicly against the use of shadow moderation.

This is overly simplistic, but I imagine two groups, technologists and free speech experts, who each hold important expertise, yet are each missing the expertise of the other.

(1) Technologists incorrectly believe that censorship is the right way to deal with trolls.

(2) Free speech experts incorrectly believe that shadow bans are necessary to deal with bots.

Somehow the gap between the two must be bridged so they can benefit from each other’s expertise.

I’ll pass, because then we have to start drawing lines about whether mis- and disinformation, plus fantasies from troll factories, should be allowed in the name of free speech. And then we would drift far from the purpose of this platform, i.e. off topic.

No sweat. “Troll factories” sounds like a good term for social media platforms that secretly manipulate other people’s conversations. :laughing:

This is an interesting scenario you describe here. That is getting into deceptive social manipulation, which can create all kinds of problems and misunderstandings.

I’m not amused. Partly because our eastern border will be closed in three hours’ time. And one big troll factory just claimed on X that it’s happening because NATO has taken over Finland and is planning a sneak attack. But hey, that shouldn’t be censored in the name of free speech.

Troll factory is a valid term. Didn’t you know it?

And I’m off from this topic.

I apologize for the joke, but I didn’t say nothing should be censored. I said secret censorship empowers troll farms to the point where they control platforms, which appears to be what you are describing. We need transparency to get everyone on the same page.

Yes, multitudes of manipulation are made possible when you admit even “exceptional” use of shadow bans. I’ve come across countless smart people who think exceptional use of secrecy against allies is acceptable. To be clear, this is the predominant mode of thought in the world of content moderation within social media companies, among academics, and even at some major institutions that claim to support free speech.

People do have rights to privacy as well as freedom of speech in many countries.

Peer-reviewed academic papers and legitimate print publications have to be reviewed before they are published; this doesn’t mean people lose the right to speak freely.

All I’m saying is there are people who study content moderation, yet they do not study the impact of shadow or non-disclosed moderation. That is a huge research gap.

You’re correct to point out that people are free to study what they wish. In my opinion, the lack of research on shadow moderation is a glaring omission and a huge opportunity for any researcher to take up.

Well, sure, there is an opportunity for research there, but if the data is concealed from public view, then studies aren’t going to be accurate without surveys where people report their experience with shadow bans, and as you’ve mentioned, they may not even know that is happening.

Anyway, that would go beyond what this support site is for, unless the research focused specifically on the Discourse app.

Since this is open source and run by independent sites, it is not in the same category as other closed-system apps.


Not necessarily. A researcher could build tools to track what’s been removed or altered. That’s already possible on Reddit, as I demonstrate with Reveddit’s website and extension. Generally speaking, an extension should always be able to track publicly visible, date-ordered comment sections. And public forums may be where the lion’s share of shadow manipulation occurs, since that is the most visible, and thus most shared, content.
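
The core of such a tool is just a comparison between two views of the same thread. This is a hypothetical, simplified sketch of the general technique behind tools like Reveddit, not its actual code; the function name and the mocked comment IDs are my own, and real data would come from the platform’s public pages or API.

```python
# Sketch: detect candidate shadow removals by diffing the author's own
# view of their comments against the anonymous public view of the thread.
# All data here is mocked for illustration.

def find_shadow_removed(author_view: list[str], public_view: list[str]) -> list[str]:
    """Return comment IDs the author can see but the public cannot.

    A comment present in the author's logged-in view yet absent from the
    anonymous public view is a candidate shadow removal.
    """
    public = set(public_view)
    return [cid for cid in author_view if cid not in public]

# The author sees three of their comments; the public thread shows only two.
author_view = ["c1", "c2", "c3"]
public_view = ["c1", "c3"]

print(find_shadow_removed(author_view, public_view))  # → ['c2']
```

In practice, the two views would be fetched once while authenticated as the author and once anonymously; any mismatch is what gets surfaced to the user.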

Plus, one could publish research based on interviews about what people think of shadow moderation on various platforms. You don’t necessarily need to know what’s been removed to publish research; just knowing how it’s done is enough to contribute something interesting.

This is a general discussion forum about community moderation. I don’t think it’s off topic to discuss content moderation research.

I wouldn’t say so. Someone running open-source software can add extra code you don’t know about. There’s no guarantee they’re running the original codebase. The best way to run a trustworthy community, in my opinion, is to declare that you do not use shadow moderation and to be transparent about all mod actions to content authors.


That is a good place to start to make a declaration.

Wombat hereby declares under penalty of perjury to not practice shadow-moderation.


If it can be proven that one specific individual or organization is doing that, you can send them a legal notice stating that if they don’t stop, you can/will file charges against them for harassment/disruption, or simply for using web services without permission; people can be prosecuted for that.

That only works if you are named, not anonymous. Anonymity has its place for challenging existing policy, but we expect to know the names of people who run forums we call trustworthy.

The anonymity in this case is because the account is suspended rather than the author wishing to be anonymous.

The account’s previous name was still anonymous; it was not a real name. Also, we don’t know what forum he manages. If someone wants to make the declaration that they do not use shadow moderation, they should say it where people know their identity and affiliation.

That does not mean all users must be identified, just that the forum manager’s name should be known so that someone can be held to account for the site’s alignment or misalignment with its stated behavior. For example, Ben Franklin’s “Mrs. Silence Dogood” letters were published anonymously in a newspaper run by his brother. The Federalist Papers and Cato’s Letters were also published anonymously but carried by papers of known repute. So you can still allow for anonymity and valuable contributions to discourse, provided someone puts their name behind the distribution.