I’ve read some encouraging posts here on the topic of moderation, such as @codinghorror’s here, and I wanted to throw my hat in the ring.
Recent disclosures by Twitter have reinvigorated debate over the use of shadowban-like moderation tools. Are they good or bad for discourse? With this post I hope to engender some debate among mods and users about the use of shadowbans.
My hypothesis is that transparent moderation fosters a prepared userbase where mods and users trust each other, and shadow moderation creates more trouble than it resolves. I therefore argue that things like the shadowban should never* be used, even for “small” forums like Discourse installations.
About me
I’m the author of Reveddit, a site that shows Redditors their secretly removed content. Reveddit launched in 2018, and since then I’ve been outspoken on Reddit, Hacker News, and more recently Twitter about my opposition to the widespread use of shadowban-like tools, which I call Shadow Moderation. In case there is any doubt that some form of shadow moderation is happening on all of the most popular platforms, please also see my talk for Truth and Trust Online 2022, slides 8-9 at 5:50, or my appearance on Using the Whole Whale. If there is any doubt that the practice is unpopular among users from all walks of life, please see the reactions of thousands of users in Reveddit’s FAQ.
Preface
For reference, here are some existing definitions of shadowban-like behavior:
- “deliberately making someone’s content undiscoverable to everyone except the person who posted it, unbeknownst to the original poster” - Twitter blog
- “A hellbanned user is invisible to all other users, but crucially, not himself. From their perspective, they are participating normally in the community but nobody ever responds to them. They can no longer disrupt the community because they are effectively a ghost.” - Coding Horror
And here are two definitions from me:
- Transparent Moderation: When users are notified of actions taken against their content.
- Shadow Moderation: Anything that conceals moderator actions from the author of the actioned content.
Note that when platforms talk about transparency, they simply mean that they clearly delineate their policies. This is not the same as transparent moderation.
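To make the distinction concrete, here is a minimal sketch in Python of how a forum backend might implement each approach. All of the names here are hypothetical illustrations, not Discourse’s actual internals:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    body: str
    removed: bool = False          # transparent removal
    shadow_removed: bool = False   # secret removal

def remove_transparently(post: Post, notify) -> None:
    """Hide the post for everyone and tell the author why."""
    post.removed = True
    notify(post.author, f"Your post was removed by a moderator: {post.body[:40]!r}")

def remove_in_shadow(post: Post) -> None:
    """Hide the post from everyone else; the author is never told."""
    post.shadow_removed = True

def visible_to(post: Post, viewer: str) -> bool:
    if post.removed:
        return False                  # gone for everyone, author included
    if post.shadow_removed:
        return viewer == post.author  # author still sees a seemingly live post
    return True
```

The two removal paths differ only in whether the author is informed, and that one difference is the entire subject of this post.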
And to be clear, I do support the use of moderation that is transparent. I do not support government intervention in social media to enforce transparency upon platforms, such as is being attempted by PATA in the US and the DSA in Europe (the Platform Accountability and Transparency Act and the Digital Services Act, respectively). Therefore, I also accept the legal right of platforms to engage in shadow moderation.
The case for transparent moderation
The case for transparent moderation is best defined as a case against shadow moderation.
Shadow moderation violates the spirit of habeas corpus, a centuries-old safeguard against secret punishment, and of the related right to face one’s accuser, enshrined in the Sixth Amendment to the U.S. Constitution. These are not mere legal technicalities; they reflect a basic dignity that we expect for ourselves and therefore should extend to others. Simply put, secretive punishments are unjust.
Now you might say, “but these are private platforms!” Indeed they are. Yet in private we still strive to uphold certain ideals, as Greg Lukianoff from FIRE explains here.
John Stuart Mill made the same case in On Liberty, where he wrote, “Protection, therefore, against the tyranny of the magistrate is not enough: there needs protection also against the tyranny of the prevailing opinion and feeling.”
Now you may say, “We need it to deal with bot spam!” That will only work for a short time. In the long run, bots will learn to check whether their content is visible, whereas genuine users mostly will not, as the sketch below illustrates. Transparent moderation therefore works just as well against bots, and shadow moderation ends up hurting only genuine individuals.
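Checking visibility is trivial to automate. Here is a hedged sketch, assuming a hypothetical forum where removed comments simply disappear from the publicly rendered page (real platforms may require an API call or JavaScript rendering instead):

```python
import requests

def is_shadow_removed(thread_url: str, comment_id: str) -> bool:
    """Fetch the thread as an anonymous visitor and look for our comment.

    No cookies are sent, so this is the logged-out view. If the comment
    is missing here but still visible while logged in, it was silently
    removed. thread_url and comment_id are placeholders for illustration.
    """
    page = requests.get(thread_url, timeout=10)
    return comment_id not in page.text
```

A spammer will happily add those few lines to a bot. A genuine user rarely thinks to log out and double-check their own comments, which is why shadow moderation ends up filtering out people rather than bots.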
Perhaps you will say, “I’m only using it on a small forum.” And there I would ask: is it really necessary to resort to deception? Getting comfortable with shadow moderation on a small Discourse installation may be a stepping stone towards supporting its use on larger platforms, where the harm is clear to me. Every side of every issue in every geography on every platform is impacted. Over 50% of Redditors have had a comment removed within their last month of usage. Shadow moderation interrupts untold numbers of conversations, perhaps out of fear of what would happen if hate were “let loose.” But Reveddit has existed for four years, and that has not happened. Instead, what I’ve observed is healthier communities, and I link many examples of that in my talk.
Shadow moderation at scale also empowers your ideological opponents. Groups that you consider uncivil are more likely to make uncivil use of shadow moderation. Any advantage you gain while in the empowered position to implement such tooling will erode over time. And as long as you make use of shadow moderation, you will not win arguments over its use elsewhere.
To my knowledge, little research has been done on the effects of shadow moderation. I know of one study indicating that mod workload goes down as transparency goes up. Yet we do know that open societies thrive compared with closed ones.
Left unchecked, shadow moderation fosters isolated, tribal groups who are unprepared for real-world conversations where such protections are not available. That can lead to acts of violence or self-harm, because the harm inflicted by words is real when we fail to accept the limits of what we can control. We can instead choose to respond to hate with counter-speech, and encourage userbases to do the same.
Finally, new legislation cannot solve this for us. Any such law would abridge our individual freedoms and have the unintended consequence of either (a) empowering the very entity the Bill of Rights was designed to protect against, or (b) not working, since there is no way to enforce transparency without giving the government access to all code. So you’re either back to square one or worse off than when you started. Support for habeas corpus, and therefore for civil discourse, is best achieved through cultural support for genuinely transparent moderation. Anything short of this would be a band-aid covering a festering wound.
Conclusion
In conclusion, one should not take the position, “He’s doing it, so I have to do it in order to compete on equal footing.” Consider instead that you may both have poor footing. Discourse installations and other forums that do not use shadow moderation have a stronger foundation: their userbases are prepared, mods and users trust each other, and both therefore feel free to speak openly and honestly. And given the widespread use of shadow moderation, it is now more important than ever to vocally support transparent moderation.
Appendix
One as-yet-unreported use of shadow moderation on Twitter may be the automatic hiding of tweets that mention certain links. I discovered this after sharing a link to my new blog, for which I had bought a cheap $2/year domain with the .win TLD. After sharing it via Twitter, I noticed my tweets were showing up for me but not for others. For example, this tweet does not show my reply, even when you click the “Show additional replies” button. To see my reply, you have to access the link directly. Effectively, that reply is invisible to anyone who does not already know me, and the system conceals that fact from me while I am logged in.
* The one circumstance under which I think widespread censorship is useful is when society has forgotten its harms. In that case, it inevitably returns, if only to remind us once again of its deleterious effects.