Shadowbans are bad for discourse, and here's why

Morality aside, I can’t see how this is going to work in a small community. If the users are at all tech savvy, they’re likely to figure out what’s going on. Even just googling “Discourse shadowban” will take them directly to this topic.

3 Likes

This is an important reference!

Lots of interesting talk in this topic; there are a few different main ideas to consider.

With Discourse, the mute/ignore feature could be considered a form of shadowban, but one that applies only to the individual who set it, not to everyone else.
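
To make that distinction concrete, here is a minimal sketch (hypothetical function and variable names, not Discourse’s actual code): a mute filters only the view of the person who set it, while a shadowban filters everyone’s view except the author’s.

```python
# Mute/ignore: the viewer opted out, so only that viewer's own feed is filtered.
def hidden_by_mute(viewer: str, author: str, mutes: dict[str, set[str]]) -> bool:
    return author in mutes.get(viewer, set())

# Shadowban: every viewer's feed is filtered except the author's own.
def hidden_by_shadowban(viewer: str, author: str, banned: set[str]) -> bool:
    return author in banned and viewer != author
```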

The post flag system does not necessarily honor habeas corpus: moderators take action in response to flags while the individuals who flagged something remain anonymous. However, how flags are reviewed is at the moderators’ discretion; ideally their decisions are based on their own interpretation of the flagged content rather than on the flag reports themselves.

There is an opportunity for diplomacy if a community member chooses to identify themselves as the person who flagged a post, whether for being off-topic, inappropriate, or some other reason. However, there is then a risk of escalation into an argument that accomplishes nothing besides making people upset.

In general, the reason platforms implement shadow-ban-style measures may be de-escalation and the avoidance of arguments and conflicts, which can seem reasonable from a community-management perspective of wanting everyone to get along. However, when there are legitimate disagreements, it can be better to give people space to express why they are upset so they can be better understood.

1 Like

That makes sense the way you describe it, although not everyone may agree.

For the feature development you posted in the marketplace, is that for an open-source solution or just for your specific use case?

1 Like

Do you mean that the customer would ask for a refund? That’s unlikely if they don’t know they’re being censored. And it would be unethical for you to continue accepting their money while also censoring them without their knowledge.

If, on the other hand, you mean that you would refund them, then there is no need to secretly censor that user at all. If their account has been cancelled, then they no longer have access to the service.

1 Like

You’re presuming that users know when they’re being shadow banned. In my observation, they most often do not; even active commenters are generally unaware when they are shadow banned.

Shadow bans are commonly used across major platforms because users either do not know about them or do not think it happens to them. Users therefore waste time commenting while platforms collect ad revenue based on inflated user counts, achieved with the help of deception.

3 Likes

It seemed like they meant a refund would be given if someone discovered they had been shadow-banned and requested one because of that. Still, if that is their policy, it could pose a lawsuit risk: someone has been paying for a platform believing that people can read what they post, and it turns out that was a lie.

2 Likes

I agree with you that there are risks to using shadow bans, whether from lawsuits or to the trustworthiness of your business.

But even without a lawsuit, the goal here was to determine whether shadow bans have any valid use case, not to demonstrate how to deceive customers to get their money. The answer to the valid-use-case question is even easier when you are talking about paying customers: you simply stop accepting their money; no shadow bans necessary.

4 Likes

A potential use case for a shadow ban is when someone’s intent seems to be solely to disrupt a community site, such that if they are banned openly, they will just create a new account regardless of the ban.

True, if the person attempting that can be identified, one could simply refuse to approve their new account application.

Shadow bans do not defeat trolls; they empower them. The number of people who would spend the necessary time to circumvent bans is relatively small, and those determined bad-faith actors are more likely to discover shadow bans than the average good-faith user is. Even on Reddit, which has advanced techniques for identifying linked accounts, moderators still regularly complain in r/ModSupport about users who circumvent both bans and shadow bans. What you want, then, is not deceptive tooling; you want to grow trustworthy communities. That takes more time, but it is better than founding a community on distrust.

When you introduce shadow bans into your community, trust goes out the window. You thereby empower trolls who have no qualms about using deception to “moderate.” The trolls, now in charge, end up elbowing out good-faith users who never imagine a forum would do this to them. In other words, you become the king troll. You would not want someone to secretly remove your own comments, so it makes no sense to do that to others, nor to empower others to do it.

Regarding bots and spam: shadow bans do not fool bots; only captchas do. Shadow bans only fool real people.
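
To see why bots are not fooled, consider this sketch (the forum URL is hypothetical, and it assumes a Discourse-style JSON endpoint): a bot can post from an authenticated session, then check from a fresh logged-out session whether its comment is publicly visible.

```python
import requests

FORUM = "https://forum.example.com"  # hypothetical forum URL

def comment_is_public(topic_id: int, marker: str) -> bool:
    """Check from a logged-out session whether a posted comment is visible.

    A bot that just posted a comment containing `marker` can run this to
    detect a shadow ban: the anonymous session has no login cookies, so it
    sees only what the public sees.
    """
    anonymous = requests.Session()
    resp = anonymous.get(f"{FORUM}/t/{topic_id}.json", timeout=10)
    resp.raise_for_status()
    posts = resp.json().get("post_stream", {}).get("posts", [])
    return any(marker in post.get("cooked", "") for post in posts)
```

A bot that detects the ban simply rotates to a new account; a human good-faith author rarely thinks to run this check.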

3 Likes

Well, for a trusting community, it would be obvious if an account had been muted.

It’s not trustworthy if you’re using shadow bans, but for the sake of argument, to whom would this be obvious? Not to the secretly muted user, nor to the community, since no member comments on every topic. Keep in mind that a shadow ban is often applied to individual comments, not necessarily to a whole account.

Well, for a community of, say, twenty people, if members have any communication outside of the shadow-banning app, within a few days most would realize that not everything they write is being published.

Talking on the phone, one could simply ask, “Hey, did you see that comment I wrote?”

I doubt it. This practice started in the early days of the internet and ballooned to what you see now: platforms with billions of users.

Either way, it’s worth researching. Someone should be studying the impact of shadow bans. To date, I haven’t seen such a study.

This would be in the category of comment editing/filtering; “ban” means complete censorship.

“Ban” just means something is not allowed. That can be a keyword, link, topic, account, or just one comment or viewpoint. The term “shadow ban” first came to the public’s attention in the context of bans that were secretly applied to whole accounts. However, platforms also fail to disclose tons of actions against individual posts to the authors of those posts.

The shadowy aspect, where the author of a filtered/throttled/demoted/removed post does not know someone took action against their content, is really the most important part and should not be left out. I prefer the all-encompassing term “shadow moderation”; however, I often use “shadow ban” because many people have already seen that term in the news. See this WaPo article, for example, which uses “shadow ban” to describe actions against both whole accounts and individual posts.

1 Like

OK, I agree that can be a problem. With Discourse, it is up to moderators whether to let someone know their posts are being edited or taken down; as a courtesy, it is usually polite to do so. With the flag system, the author is automatically notified when a post has been flagged and is then given a chance to review/edit the post after a “cool down” delay of ten minutes. It is a sophisticated system.
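
As a rough illustration of that notify-and-cool-down flow (a sketch under the assumptions described above, not Discourse’s actual implementation; all names are hypothetical):

```python
import time
from dataclasses import dataclass, field

COOL_DOWN_SECONDS = 10 * 60  # the ten-minute delay described above

@dataclass
class FlaggedPost:
    author: str
    flagged_at: float = field(default_factory=time.time)

def flag_notification(post: FlaggedPost) -> str:
    # The author is told right away that the post was flagged...
    return f"@{post.author}: your post was flagged by the community."

def can_review_edit(post: FlaggedPost, now: float) -> bool:
    # ...and may review/edit it once the cool-down has elapsed.
    return now - post.flagged_at >= COOL_DOWN_SECONDS
```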

With massive platforms like Google’s YouTube, things are different. I have seen that when someone is banned from a channel, their comments are censored, but they are not notified of that.

Yes, it is up to the platform. I do not argue for a legislative remedy to tie platforms’ hands. That would just give government the power to censor, which is precisely what the First Amendment guards against. Plus, a legislative remedy might not apply to social media services headquartered overseas, and even if it did, it might be hard to enforce.

Reddit and Twitter/X were once small, yet they entertained this topic too, for example here in 2007:

A better idea is a silent ban. Let him post comments, and show him those comments, but just leave them out for everyone else.
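
That “silent ban” is a one-branch filter, which may be part of why it spread so easily; here is a minimal sketch of the logic (hypothetical names, not any platform’s real code):

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    silently_banned: bool = False  # set by a moderator; the author is never told

def visible_comments(comments: list[Comment], viewer: str) -> list[Comment]:
    """Return the thread as a given viewer sees it: a silently banned comment
    is shown only to its own author, so the author believes it was published
    while everyone else never sees it."""
    return [c for c in comments if not c.silently_banned or c.author == viewer]

thread = [Comment("alice", "hello"), Comment("bob", "first!", silently_banned=True)]
assert len(visible_comments(thread, viewer="bob")) == 2    # bob sees both
assert len(visible_comments(thread, viewer="alice")) == 1  # alice sees only hers
```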

Thus, platforms may have grown with the help of shadow moderation. By using it, platforms mislead users into thinking they are able to share their viewpoints without being censored. In reality, users waste their time writing while platforms rake in advertising dollars.

You are using past tense, but Twitter/X still does this in 2023.

4 Likes

Language differences are interesting, or maybe this is more a matter of culture, but quite widely it is held that only the state/government can commit censorship, never private parties. By that view, moderation, whether transparent or shadowed, is not part of censorship, and the term “censoring” is used mostly in a… charged sense.

But keep going; this is an interesting topic and I’m trying to follow.

2 Likes

Yes, I wish people would focus on this: how content gets removed, rather than what gets removed. It would be a simple code change to reveal existing shadow bans, and the CEO already promised to do it a year ago.

The hold-up, X’s former head of Trust & Safety Yoel Roth says, is that X must also explain the reason for the shadow ban to users. He says that since these explanations are in free-text notes that were not written for the eyes of the public, they must first be categorized before people can be told about their shadow bans.

I don’t buy that explanation. “Bare notice,” as described here by one researcher, is separate from explanations, and the need for categorization is no excuse for continuing to deceive users.
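
To make that distinction concrete, here is a hypothetical sketch (my names, not X’s code): bare notice needs nothing from the free-text internal notes, so it could ship independently of any categorization work.

```python
from dataclasses import dataclass

@dataclass
class ModerationAction:
    post_id: int
    internal_note: str  # free-text note, never written for public eyes

def bare_notice(action: ModerationAction) -> str:
    """Tell the author *that* action was taken; no internal note required."""
    return f"Your post {action.post_id} is not currently visible to others."

def full_explanation(action: ModerationAction) -> str:
    """Telling the author *why* requires vetting/categorizing the note first."""
    return (f"Your post {action.post_id} is not visible to others "
            f"because: {action.internal_note}")
```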

Whenever I see people misrepresenting X’s CEO as the new free speech champion, I try to clarify the facts, but I do not have much of a following. Personally I don’t think we can rely on any individual or organization to provide free speech for us. We must each take that responsibility ourselves. Certainly we should unite where possible, but if you’re waiting for someone else to protect your speech, you’re at risk of being misled.

2 Likes