Should users gain/lose trust based on flag actions?

(Christoph Rauch) #1

That is sensible, but what can the system do in case of a rogue moderator? Can users penalize moderators, akin to the meta-moderation framework of Slashdot perhaps?

0 Likes

So What Exactly Happens when you "Flag"?
(Tomasz P. Szynalski) #2

Why should you lose “a lot a lot” of trust just because you made an appeal that was rejected? Some of these may be borderline cases. Suppose you write something that is mildly off topic, or perhaps provocative bordering on trollish; some users flag it, and you appeal to a mod. Now the mod has two choices:

  1. Side with the flaggers. Result: The poster gets a huge penalty.
  2. Side with the poster. Result: The flaggers get a hefty penalty.

In reality, no one deserves a big penalty because the case isn’t clear-cut – both sides could argue their case reasonably well. Again, I’m thinking about situations where the mod would say “I can see how you might have thought your post was OK, but sorry, I can’t allow it”.

Solution:
The penalty should be at the mod’s discretion. It should be possible to disallow a post AND let the poster off with a warning.
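
To illustrate (the names and numbers here are purely hypothetical, nothing to do with Discourse’s actual internals): if flag resolution took the penalty as an explicit, optional argument, a mod could disallow a post while issuing only a warning.

```python
from enum import Enum

class Penalty(Enum):
    NONE = 0    # warning only, no trust change
    MINOR = 1
    MAJOR = 5

def resolve_flag(post, uphold_flag, penalty=Penalty.NONE):
    """Resolve a flag at the moderator's discretion.

    uphold_flag -- True: side with the flaggers; False: side with the poster.
    penalty     -- how much trust the "losing" side gives up; defaults to NONE,
                   i.e. the post can still be hidden with just a warning.
    """
    if uphold_flag:
        post.hidden = True
        post.author.trust -= penalty.value
    else:
        for flagger in post.flaggers:
            flagger.trust -= penalty.value
```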

10 Likes

(Thomas F. Burdick) #3

I think this will be a pretty important use case. As a moderator, you might be happy to allow the flagged post under normal circumstances, but you’re trying to tamp down an incipient flame war. It would be a shame to have to choose between allowing the unedited post and significantly penalizing the user when it’s a pretty minor offense.

1 Like

(Jason) #4

I agree. I know that I personally would be significantly less likely to take action if it damaged someone’s trust level, especially in the case of users who are new to a site and aren’t too familiar with how it operates.

Being able to report moderators wouldn’t be a bad thing; however, I don’t think actually penalizing them in some way would work for many forums. Particularly for small to mid-size forums, it might be preferable for an admin or higher-ranking moderator to take a look and decide what to do, rather than letting the community decide on penalizing the mod. I don’t mean to say it’s not a good feature to have available, but if there is a penalty system for moderators, it should be optional in my opinion.

I know that I personally have taken some not-so-popular actions as a forum moderator, but they needed to be done whether the community liked it or not. Not that being flagged a few times for mod actions would have hurt me, but I know there are far less popular moderators out there who are just trying to do their job rather than abusing their power.

0 Likes

(Christoph Rauch) #5

Users don’t really penalize moderators in Slashdot’s system. Users are presented with a randomized selection of posts, each accompanied by one moderation action that was applied to it. These actions can be either “up” or “down”, plus a tag like “flame” or some such.

The meta-moderator can then rate this action as either “fair” or “unfair”. This influences the “karma” of the moderator. In Slashdot’s system the moderator is not a fixed person, but is selected by the system based on certain properties of the user: activity, ratio of upvoted to downvoted posts, etc.

So a lot of “unfair” meta-moderations lowers the probability that this user will be selected as a moderator again.
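
Roughly, the loop works like this (a simplified sketch of how I understand the Slashdot scheme, not their actual code; all names are made up):

```python
import random

def pick_metamoderation_batch(recent_moderations, size=10):
    """Show a meta-moderator a random sample of recent moderation actions."""
    return random.sample(recent_moderations, min(size, len(recent_moderations)))

def metamoderate(action, verdict):
    """verdict is "fair" or "unfair"; it feeds back into the moderator's karma."""
    action.moderator.karma += 1 if verdict == "fair" else -1

def eligible_for_mod_points(user, karma_floor=0):
    """Many "unfair" verdicts push karma below the floor, so the system
    stops handing this user moderation points."""
    return user.active and user.karma >= karma_floor
```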

0 Likes

(Gweebz) #6

I completely agree. I think it should be up to the moderator to choose the penalty (if any) when a user’s appeal or flag is denied. A minor difference of opinion should not penalize either user. Trolling through frequent flagging and/or appealing should be severely punished. All of this needs to be evaluated on a per-situation basis.

0 Likes

(F. Randall Farmer) #7

There are a lot of great comments about some amazing exceptions on this thread.

BTW - I’m Randy Farmer and I’ve been advising the team on this and other issues. Here are my qualifications:

Especially interesting in this case is the whole of Chapter 10, which you can read for free here:
http://buildingreputation.com/doku.php?id=chapter_10

Anyway - Q&A is different from forums, so we’ll be adapting and experimenting here - this feedback is great!

## Something Important

The most important thing about reputation scores is that they are in context. “Flagging” reputation should be its own (internal) score. That is the score that goes up or down based on how accurate you are at flagging content, not your general trust score. As @tszynalski points out, significantly modifying your general trust confuses things.
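
In other words, keep the scores in separate buckets. A minimal sketch of what I mean (field names invented for illustration, not Discourse’s schema):

```python
from dataclasses import dataclass

@dataclass
class UserReputation:
    # Each score lives in its own context and moves independently.
    general_trust: float = 0.0       # visible, earned through normal participation
    flagging_accuracy: float = 0.0   # internal, moved only by flag outcomes

def record_flag_outcome(rep, flag_upheld):
    """Only the internal flagging score reacts to how accurate a flag was;
    general trust is left alone, so marginal calls stay low-risk to report."""
    rep.flagging_accuracy += 1.0 if flag_upheld else -1.0
```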

At Yahoo! Answers we learned that people won’t report marginal calls even when only their flagging reputation is at risk, much less when it hurts their overall reputation.

Users definitely were hiding the worst of the worst content. All the content that violated the terms of service was getting hidden (along with quite a bit of the backlog of older items). But not all the content that violated the community guidelines was getting reported. It seemed that users weren’t reporting items that might be considered borderline violations or disputable. For example, answers with no content related to the question, such as chatty messages or jokes, were not being reported. No matter how Ori tweaked the model, that didn’t change.

In hindsight, the situation was easy to understand. The reputation model penalized disputes (in the form of appeals): if a user hid an item but the decision was overturned on appeal, the user would lose more reputation than he’d gained by hiding the item. That was the correct design, but it had the side effect of nurturing risk avoidance in abuse reporters. Another lesson in the difference between the bad (low-quality content) and the ugly (content that violates the rules): they each require different tools to mitigate.

Discourse will need to track multiple reputations, including “flagger” quality - and this has been shown to work to get rid of the very worst (spam/troll) content. It doesn’t deal with the marginal cases (we’re still debating about how to handle “off-topic”) - thoughts on that based on operational experience are most welcome!
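
To give a concrete feel for that risk-avoidance effect (illustrative numbers only, not the actual Yahoo! Answers weights):

```python
def expected_reputation_gain(p_upheld, gain_per_hide=1.0, loss_on_overturn=3.0):
    """Expected reputation change for reporting an item, given the reporter's
    estimate of the chance the report is upheld. With an asymmetric penalty,
    anything short of a clear-cut violation has negative expected value,
    which is exactly why borderline items stopped being reported."""
    return p_upheld * gain_per_hide - (1 - p_upheld) * loss_on_overturn

print(expected_reputation_gain(0.95))  # obvious spam:    +0.80 -> worth reporting
print(expected_reputation_gain(0.60))  # borderline call: -0.60 -> users skip it
```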

15 Likes

How do you automate trust?
(Jackdoh) #8

I love the “side topic” feature of Discourse. It would be great if the off-topic flag could be set to automatically convert the post into a related topic along with all replies to it. This would eliminate a lot of trolling, grammar police, and all other kinds of derails.
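
Something like this, conceptually (hypothetical names, not an actual Discourse API):

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    title: str
    posts: list = field(default_factory=list)

def handle_off_topic_flags(post, flag_count, threshold=3):
    """When enough off-topic flags accumulate, move the post and its replies
    into a new topic instead of just hiding them."""
    if flag_count < threshold:
        return None
    new_topic = Topic(title=f"Split from: {post.topic.title}")
    new_topic.posts = [post] + list(post.replies)      # move the whole sub-thread
    post.topic.posts = [p for p in post.topic.posts
                        if p not in new_topic.posts]   # excise them from the original
    return new_topic
```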

2 Likes

(Jason) #9

Do you really want a bunch of new troll, grammar police, etc. threads though? I think just hiding one or two posts by having multiple users flag them would work better, rather than cluttering up your forum with pointless threads that will then need to be flagged themselves or cleaned up by a moderator.

1 Like

(Jackdoh) #10

OK, maybe obvious spam and trollish posts should be hidden. But sometimes there is stuff that is a borderline derail, or only interests a small minority of readers, or is a really interesting tangent that shouldn’t be on the main thread. Maybe the author forgot/didn’t know/didn’t care to create it as a new topic; the community can then decide to excise the post and its replies, but not throw them away completely.

1 Like

(Jason) #11

That sounds really good to me, especially if it doesn’t require moderator interaction to create the new thread. Maybe have a checkbox when flagging off-topic posts?

0 Likes