All past and current platforms share moderation failures.
In an ideal Shangri-La, community moderation happens at the individual level, internally, without the need for authority figures or flagging systems… However, humanity has not reached that level of enlightenment.
Putting neutral controls in place to help reduce personal bias in moderation is key, as is often demonstrated in government when controls, or after-the-fact audits in their absence, are lacking. Power often corrupts even the most well-intentioned individuals. Though not all.
But Tolkien did say that even the purest of heart will eventually start to be corrupted by the One Ring.
Removal of temptation removes the possibility of faltering.
I have been around since FidoNet during the DOS BBS days. History repeats both because we fail to learn from the past and because we limit our future with tunnel vision.
At a basic level, moderators are rated by the community at a quick glance.
Some companies have workers evaluate their supervisors; a number of worker reviews are combined with the supervisor's own manager review into a combined rating of sorts.
It's really not a bad system for fostering growth at both the member and moderator levels.
On multiple occasions, members have approached me about a moderation action they knew I was involved in and provided feedback when they felt I was unfair. And sometimes they were right… So, responsibly, I apologized to those involved and to the community, accepting and owning my fallibility. This gains a moderator credibility by demonstrating that we are just as human.
I wanted to set up a news site with inevitably some political topics.
Initially I would be the sole admin/mod.
I was concerned I’d spend a lot of my time moderating.
Users flagging would be a big help, but also a hindrance (false flags).
If users held politically diametric views they could attempt to use the flag system as censorship of an opinion they disagreed with.
Flag integrity would be a huge help.
False flaggers would get a negative weighting; after a number of false flags, their flags could be given no weight at all.
Correct flags would earn a positive weighting that gave their future flags more credence.
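The weighting idea above could be sketched roughly as follows. This is a hypothetical illustration only; the function name, the five-flag minimum, and the 50% cutoff are all my own assumptions, not any real forum software's behaviour.

```python
# Hypothetical sketch of the flag-weighting idea described above.
# All names and thresholds are illustrative assumptions.

def flag_weight(agreed: int, disagreed: int,
                min_flags: int = 5, cutoff: float = 0.5) -> float:
    """Return a 0.0-1.0 weight for a user's future flags.

    Users with too little resolved-flag history get a neutral weight;
    users whose agreement rate falls below `cutoff` get no weight at all.
    """
    total = agreed + disagreed
    if total < min_flags:
        return 1.0  # not enough history to judge either way
    accuracy = agreed / total
    if accuracy < cutoff:
        return 0.0  # repeated false flags: future flags carry no weight
    return accuracy  # correct flags earn proportionally more credence

# A user with 9 agreed and 1 disagreed flag keeps strong credence,
# while a habitual false flagger is silenced entirely.
print(flag_weight(9, 1))  # 0.9
print(flag_weight(1, 9))  # 0.0
```

The neutral default for new flaggers is a design choice: it avoids punishing users before there is enough history to judge them.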
Flag integrity would be helpful in my case.
If you have teams of mods, less so.
I hadn’t even considered Mods posts being flagged. Seems an edge case.
Also, as I got zero likes for my original suggestion until the OP gave me one (thanks), it's probably also an edge-case problem/solution.
I'm amused, or is that saddened, by how many of the specifics set out in this thread are examples of what I was attempting to start a discussion about: the general mechanisms of societies / the sociology of digital community… and thus to isolate cause, effect, hypothetical responses, design guides, and patterns/anti-patterns.
Well-understood problem domains can have simple solutions (a high degree of coherence in advance). Complicated domains necessarily have many components that, with care and precision, lead to solutions, but normally fragile ones. Complex domains represent the case where a vision of the relevant components has not yet been isolated, and thus need an exploratory approach; coherence can only be detected in arrears. [This is the observation that sits at the philosophical core of the agile software development movement, and thereby generates concrete artefacts such as the daily standup, or the product owner with a backlog that is groomed on the basis of user needs.]
If a moderation system is set up with only the capacity to deal with the simple, then there is never the opportunity to move the complex, through understanding, past complicated and into the realm of the simple.
I'm sure I read somewhere aspirations about a next-generation online community within Meta, but I currently see that as precluded by the moderation stance. The flag-integrity mechanism may have relevance, but it is probably in the complicated/fragile category. Another relevant component would be the freedom to discuss the sociological drivers that are not yet clear enough to be put in a highly coherent thread, but would get there in the future.
You're quite welcome, even though our ideas perhaps go in somewhat different directions. But they are related as well.
Political topics, much like what we experience here, involve differing POVs and philosophies on how to achieve similar goals/end points. That creates alignment issues.
There is a flag-integrity system of sorts, based on how a moderator resolves a flag.
For example: flags a moderator agrees with will keep that user's flag score at 100%.
If you have a user whose flags you almost always disagree with, their flag score drops as a percentage.
On the one forum where I am a volunteer admin, we had a very dedicated troll who, as a new user, would post tons of links to other posts on the forum. The system would then flag the new user and all the linked posts, even though only the new user in question was the issue.
However, we then had to disagree with so many system-issued false flags that the system's flag score dropped to 50%.
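The agreement-percentage score described above can be sketched in a few lines. This is a minimal illustration of the simple agreed/total ratio the posts describe, not the actual scoring code of any forum platform; the function name is mine.

```python
# Minimal sketch of the agreement-percentage flag score described above.
# The plain agreed/total ratio is an assumption, not real platform code.

def flag_score(resolutions: list[bool]) -> float:
    """Percentage of a flagger's flags that moderators agreed with.

    Each entry in `resolutions` is True if a moderator agreed with
    the flag and False if it was rejected as a false flag.
    """
    if not resolutions:
        return 100.0  # no resolved flags yet: score starts at 100%
    agreed = sum(resolutions)
    return 100.0 * agreed / len(resolutions)

# The troll scenario: a system flagger with 10 agreed flags then
# issues 10 bulk false flags that moderators must disagree with,
# dragging its score down to 50%.
history = [True] * 10 + [False] * 10
print(flag_score(history))  # 50.0
```

This also shows the lopsidedness complained about below: one burst of automated false flags can halve a score that took many correct flags to build.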
So moderators are incentivised to find things to complain about. This is a memetic mechanism that probably explains part of the culture of Meta and, by extension through Conway's law, of other communities; it may come from Reddit and source-something (forge?), both of which I've heard can be more penal than rewarding.
It also explains why a post I wrote, which used the word 'catholic' with a lower-case c AND in single quotes (meaning illustrative of the rich variety of life), was treated as if I had written Catholic with a capital C, which is a reference to a religion.
You get the behaviours you incentivise, whether explicitly or as unintended consequences.
Discord servers usually use a dump category/channel with little to no moderation, a "Free4all" channel that is not publicly viewable, requiring either proof of age or for a user to apply for access. Though on Discord it is usually proof of age.
Discourse's openness in being extensible through plugins and theme components is very positive, forward-thinking, and a very large reason for its success.
Keeping discussions of philosophies open is also key, with healthy debate among differing POVs. It is only when debates are misread and closed that things slow down.
There is a flag-integrity system. However, it is lopsided when there is no option, save babysitting, to ensure integrity in cases where mod selection is not in, dare I say, competent hands. I am speaking only to my own scenario, and to others who hold an admin position more as a maintenance/design control, where a company decides who gets mod and relies on system controls versus personal integrity.
The Discourse moderation system has a weighting system that works in the way you are describing. The likelihood that a flagged post will be hidden is based on the reliability of the flagger’s previous flags. The score is determined by the percentage of times that a user’s flags have been agreed with by a staff member.
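The shape of that reliability weighting can be illustrated with a toy function. To be clear, this is NOT Discourse's actual algorithm (which is considerably more involved); the function name, the per-flag weights, and the threshold are all my own illustrative assumptions.

```python
# Purely illustrative sketch of reliability-weighted flag hiding.
# Not Discourse's real algorithm; names and threshold are assumptions.

def should_hide(flagger_accuracies: list[float],
                threshold: float = 1.0) -> bool:
    """Hide a post when the combined weight of its flags crosses a threshold.

    Each flag counts for its flagger's historical agreement rate (0.0-1.0),
    so one reliable flagger can outweigh several unreliable ones.
    """
    return sum(flagger_accuracies) >= threshold

# Two flags from unreliable flaggers don't hide the post,
# while a single flag from a highly reliable flagger does.
print(should_hide([0.2, 0.2]))  # False
print(should_hide([1.0]))       # True
```

The point of the design is exactly what the post above describes: the system trusts flags in proportion to how often staff have agreed with that flagger before.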
I don’t believe that the flag weighting system is documented on Meta. One of my goals for the coming week is to try to document it.
To reiterate @JammyDodger's points, I don't think this topic belongs in the community category. That is a category for discussions about launching, building, growing, and managing communities. My assumption is that the category is primarily intended for people who are managing Discourse sites. I know that I stretch that quite often myself, because I'm interested in a lot of ideas related to online communities.
The OP contains a valid suggestion that could be worked through a bit and posted in the feature category.
The complaints about Meta’s moderation have been noted. The community category is tricky, because Meta is an obvious site to use for examples, but the category is not the appropriate place to complain about Meta’s moderation.
From my own point of view, user generated groups would be useful for these types of discussions. If discussions could occur in a group that I was free to join and leave, I’d be happy to participate in them. Other than the complaints about Meta’s moderation, I’m interested in a lot of what’s being discussed here.
I’m going to close this topic. Keeping the above points in mind, feel free to start new topics that branch off from the ideas introduced here.