Combining trust system and pre-moderation

So let’s assume your community wants pre-moderation of all posts by members with a trust level of 0.

One option is to have all 0-level posts enter a queue for admin approval. This is effective, but is completely dependent on the admin’s schedule, and can be frustratingly slow.

Another idea would be to publish all 0-level posts immediately, but require a click to display the post. If enough users read the post without flagging it, or if a trusted user liked it, the post would be displayed normally, for all to read.

Any thoughts? Could that be a useful way to prevent untrusted users from disrupting a conversation?


This is an interesting idea - in effect the “flag” permission/trust level could include a pre-screening method.

Why not go ahead and display the post (without a click) with a special color/format (grey on grey) only to users of the “flagging” trust level, indicating that it hasn’t been approved, and provide a single click to approve or flag it. That’s what you’re really looking for - a pass/fail snap-judgement. As with flagging, keep a count of “approved”, and when the threshold is reached, it becomes a full-fledged post.
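
To make the counting concrete, here’s a very rough sketch of what I picture - every name and threshold below is invented, not actual Discourse internals:

```python
# Rough sketch of one-click pass/fail vetting; names and thresholds are made up.
from dataclasses import dataclass

APPROVALS_TO_PUBLISH = 3  # hypothetical thresholds, presumably site settings
FLAGS_TO_HIDE = 3

@dataclass
class PendingPost:
    approvals: int = 0
    flags: int = 0
    state: str = "pending"  # grey-on-grey, shown only to flagging-level users

def record_review(post: PendingPost, verdict: str) -> None:
    """Record a single pass/fail click from a user at the flagging trust level."""
    if verdict == "approve":
        post.approvals += 1
    else:
        post.flags += 1

    if post.approvals >= APPROVALS_TO_PUBLISH:
        post.state = "published"  # becomes a full-fledged post for everyone
    elif post.flags >= FLAGS_TO_HIDE:
        post.state = "hidden"     # falls back to the usual flag handling
```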

I’m not sure how this would interact with the new posts indicators, though… And there are probably other side effects.

That could work… it would be less important to “hide” the text if it was otherwise clear that this was an un-vetted post from a user who may or may not fit the established community norms. Allowing trusted users to quickly approve or disapprove these posts could also accelerate the determination of trust for the 0-level user, as users would be quicker to execute the simple pass/fail actions than they would be the more fraught like/flag actions. Hmm!

The idea of allowing trusted users to flag/approve new posts does take the load off the admins and mods, and I very much like it.

With my paranoid cap on, here’s a question: could it be abused? I’m not sure exactly how, but you do have to consider all angles.

Preventing abuse of the trust system would be crucial for Discourse in general, right?

I’ve had some success with preventing same-IP like/flag actions, and I’d see that applying to pass/fail approvals as well… if you share an IP with a 0-level user, you can’t approve their posts.

Obviously a single IP can represent a lot of different people, but I thought it was a reasonable enough tradeoff to miss a few legitimate inter-office or inter-school thumbs-up if it made it more obnoxious to game the thumbs (or approval) system.
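
To show the kind of check I mean (names are made up, and real shared networks make this fuzzier than the sketch suggests):

```python
# Sketch of the same-IP rule: pass votes from the author's own address don't count.

def count_valid_approvals(approvals: list[tuple[str, str]], author_ip: str) -> int:
    """Count approvals, skipping any reviewer who shares the author's IP.

    `approvals` is a list of (reviewer_id, reviewer_ip) pairs.
    """
    return sum(1 for _reviewer, ip in approvals if ip != author_ip)

# Example: two sock-puppet votes from the author's address are ignored.
votes = [("alice", "198.51.100.4"),
         ("sock1", "203.0.113.7"),
         ("sock2", "203.0.113.7")]
assert count_valid_approvals(votes, author_ip="203.0.113.7") == 1
```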

Another thought… even if this was a reasonable pre-moderation compromise, there would still be a few posts that would preferably receive admin attention before being published in any form.

Simple word/phrase matching might be enough to keep the really terrible stuff off the site, but even better would be some form of rudimentary semantic analysis (oh snap!) determining the level of anger/vulgarity/all-capsiness of a post before allowing 0-level contributions up for community vetting.

Piece of cake, right? :wink:
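
For the simple end of that spectrum, a crude pre-screen might look something like this - the word list and thresholds are invented for illustration, and this is nowhere near real semantic analysis:

```python
import re

# Hypothetical word list and thresholds; tune per community.
BLOCKED_PATTERNS = [r"\bviagra\b", r"\bfree money\b"]
MAX_CAPS_RATIO = 0.6
MAX_EXCLAMATIONS = 5

def needs_admin_review(text: str) -> bool:
    """Hold a 0-level post for admin attention instead of community vetting."""
    if any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_PATTERNS):
        return True
    letters = [c for c in text if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    return caps_ratio > MAX_CAPS_RATIO or text.count("!") > MAX_EXCLAMATIONS
```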

Well, one problem you will run into is that a lot of basic trust users may like the “what is it like to kiss Angelina Jolie” topic, even if that is not the sort of topic you want on your site.

I’d imagine you would only want very highly trusted users, who have been around a long time and know the tone of the forum, approving topics.


Hey Jeff, I was thinking about responses/comments here. Trying to think of a way for a community to protect itself from disruptive newcomers without having to wait for flags…

By having a customized trust level for “see and review” new messages, forum operators could set that to any level - level 1 by default and mod level if they want to lock it down.
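
A trivial sketch of the gate I mean, with an invented setting name:

```python
# Hypothetical site setting: which trust level gets the "see and review" view.
MIN_TRUST_TO_REVIEW_NEW_POSTS = 1  # default; set higher to lock it down

def can_see_and_review(user_trust_level: int) -> bool:
    """Who sees the unapproved (grey) posts and the pass/fail buttons."""
    return user_trust_level >= MIN_TRUST_TO_REVIEW_NEW_POSTS
```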

Though I’m no fan of symmetry for symmetry’s sake, I’m not seeing why this kind of approval inherently requires more trust than flagging-to-hide. In short - if a group of users can be trusted to provisionally hide stuff - why can’t they show new stuff as well? Anything “inappropriately” approved using this process can still be hidden by flags or moderator action.

Yeah sorry @aja I was thinking of the previous discussion about topic vetting.

So there are two things here:

  1. Are we talking about replies, or topics?

  2. Are we screening content before anyone can see it, or after people have seen it?

Topics are much more dangerous in my opinion, since a front page filled with inane and ridiculous topic titles ensures that nobody is going to stick around on your forum, whereas the occasional inane reply to a topic isn’t that big a deal, and is easily ignored by the overall community. Now, if every reply is inane, or if there’s a massive influx of constant noise in replies, then you certainly have a problem. But the system can tolerate quite a bit of noise in replies without breaking down, whereas noise in topics is far deadlier.

I am much more open to strict moderation on starting new topics, because it is safer and easier on every level. There are fewer topics, they are far more important to the structure of the site than individual replies, and so forth.

It is very different, it is the difference between “guilty until proven innocent” and “innocent until proven guilty”. HUGE difference, just try committing a crime in one system versus the other :dizzy_face:

But there’s the catch-22 – how do you know someone will be disruptive, unless you let them post first to see?

I think it’s a huge ask to require the community to deal with every new user creating “invisible” posts that other community members have to somehow like or otherwise support before they can be visible at all. Huge, huge ask, because if they aren’t actively “liking” or somehow vetting these posts, you will have no new users. It’s kind of like never-ending, ongoing work… your users must commit to constant “like”-ing of new user posts, not just occasionally flagging things that are out of bounds.

It also reminds me a little of hellbanning all your new users: Suspension, Ban or Hellban?

Another idea that came up in another topic here is for new users to be isolated to certain sandbox categories, and they can only escape the sandbox if they gain trust levels. But that’s also complicated, and has a Lord of the Flies aspect to it, because what sane person wants to go into the sandbox with all the newbies and duke it out?

I understand the goal, but I’m not sure this is the way. Pre-moderation of topics is very strong and has almost no downsides. But pre-moderation of all new user posts just makes my spider sense tingle, and not in the good way…

My thoughts on new topics haven’t changed; for communities that need quality over quantity, admin approval for most new topics is a huge help.

My main concern right now is the impact that disruptive or low-quality replies can have on an otherwise civil and relevant conversation. The first week of a debate on gun control is civil and constructive, but eventually is overrun by people who want to ramp up the rhetoric. A discussion around the scientific process is eventually taken over by pseudo-science enthusiasts. Etc.

Indeed.

So, the way I’m picturing this, after reading Randall’s great response, is that the 0-level posts would be visible to all, immediately. They’d have a different coloration that made them less prominent than normal posts, but they wouldn’t be invisible.

So then what’s the point if they’re still visible? IMO, a system like this could go a long way towards:

a) getting the community more active in self-moderation: people prefer to take positive actions, like approving an entry, not negative ones like flagging. A post has to be pretty bad before most users are willing to flag it, and if you have a relatively inactive community in the first place, the flags may not happen until it’s too late.

and b) signaling to readers that 0-level users don’t necessarily speak for the community, and that their posts should be taken with a grain of salt.

In other words, say you have a community that’s focused on discussing quantum physics, but 0-level users keep wanting to talk about telekinesis. With the option to require this sort of immediately-visible-pre-moderation, the trusted users would have a way to quickly and gently discourage these types of posts, without needing to flag a specific rule violation.

also, c) you could build a LOT of data about a new user’s trustworthiness pretty quickly this way, plus it might motivate established users to be more aware and welcoming of good newcomers.
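
Just to illustrate what I mean by “data” - everything below is invented, including the weak negative weight given to silent views:

```python
from dataclasses import dataclass

@dataclass
class NewcomerRecord:
    passes: int = 0  # "yes, appropriate" clicks on their posts
    fails: int = 0   # "no" clicks or flags on their posts
    views: int = 0   # reads by trusted users who didn't vote either way

    def provisional_score(self) -> float:
        """Crude ratio: fails count fully against, silent views only weakly."""
        total = self.passes + self.fails + 0.25 * self.views
        return self.passes / total if total else 0.0

# A newcomer with mostly passes climbs quickly; silent views slow that down.
record = NewcomerRecord(passes=6, fails=1, views=4)
print(round(record.provisional_score(), 2))  # 0.75
```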

Here, does this ugly comp make any sense? :smile:


(Do note that new users already have a grey username to let you know they joined in the last 2 days. We added that a while ago. So there is a tiny bit of context when you see a post by a user: if their name is grey and not a blue link, you know they are a new user.)

Indeed, but isn’t clicking “no” to the question “is this an appropriate contribution?” a negative action?

Sure, unless a few of your so-called trusted users are keen on telekinesis. :wink:

Overall I like this mockup a lot, and it certainly makes sense as a way to get from trust level 1 to trust levels 2 and beyond. Curious what @frandallfarmer thinks.

It certainly is, but I think it’s a degree or two less negative. “Is this appropriate? No.” doesn’t have the same “running to teacher” feel as flagging.

Also, if a lot of users have “seen” the post, but not clicked “Yes”, does that say something about the post? What if users who are usually quick to click “Yes” read the post without choosing Yes or No?

Seriously, right? That’s where I’d love to see a trust system be able to determine the difference between an active user who’s trusted enough to post and to make some decisions (this is spam) but not trusted enough to set the tone of the community (their “likes” don’t count as much, their approvals don’t count as much, etc).
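
Something along these lines is what I picture - the weights and the publish threshold are totally invented:

```python
# Sketch of trust-weighted vetting: higher trust levels carry more weight.
TRUST_WEIGHT = {0: 0.0, 1: 0.5, 2: 1.0, 3: 1.5, 4: 2.0}  # hypothetical weights
PUBLISH_SCORE = 3.0

def weighted_approval_score(reviewer_trust_levels: list[int]) -> float:
    """Sum the weights of everyone who clicked "yes" on a pending post."""
    return sum(TRUST_WEIGHT.get(level, 0.0) for level in reviewer_trust_levels)

# Three level-1 users aren't enough on their own...
assert weighted_approval_score([1, 1, 1]) < PUBLISH_SCORE   # 1.5
# ...but add a couple of established regulars and the post goes live.
assert weighted_approval_score([3, 2, 1]) >= PUBLISH_SCORE  # 3.0
```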

If you look at the screenshot mockups, @riking, this is not the same thing – what is being proposed here is peer vetting of new user posts.

It is true we do have a switch to turn on staff vetting of new posts now (pre-moderation) but they are very different concepts, and the work goes to very different places, in each case.
