So What Exactly Happens when you "Flag"?


(Adam Gurri) #1

Didn’t want to just go and do it to anyone, but was wondering what the sequence of events was when you flag something. Does something pop up next to the topic link saying that it’s been flagged? Are you notified after you open the topic?

Just curious. Very excited for this.

(Jeff Atwood) #3

What happens when you flag something?

  1. “Enough” trust level 1+ users flag a post as {flag type}. By default this is 3, but it can be overridden as a site setting, or in the future, more of a threshold based on trust levels (e.g. two trust level 3 users is all it takes to hide a post.) Remember that new trust level 0 users don’t have the ability to flag.

  2. The post in question is immediately hidden:

    • the post author sees

      Your post was flagged by the community. Please see your private messages.

    • the community sees

      This post was flagged by the community and is temporarily hidden. View hidden content.

    • staff sees the actual post, as posted, in a dimmed state to indicate it has been hidden for others.

  3. A very friendly private message is sent to the author of the post, describing what happened, in the friendliest imaginable language, and letting them know that for {flag type} a considered edit of any kind is enough to un-hide the post.

Also of note: there is a bit of a “cooling off interval” during which a post is hidden but cannot be edited by its author. This is configurable, but is set to 10 minutes after reaching the flag threshold by default. As a weird side note, this can open up very old posts, say 5 years old, to edits that would not normally be allowed because they are far beyond the default owner edit interval, which is 6 months.
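The threshold-and-cooldown flow above can be sketched as a small Python model. This is purely illustrative (Discourse itself is written in Ruby); the class, method names, and defaults are assumptions mirroring the numbers described in the post:

```python
# Illustrative model of the flag-to-hide flow; thresholds and names are
# assumptions based on the defaults described above, not Discourse code.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

FLAG_THRESHOLD = 3                      # "enough" flags; a site setting
EDIT_COOLDOWN = timedelta(minutes=10)   # cooling-off interval after hiding

@dataclass
class Post:
    author: str
    flaggers: set = field(default_factory=set)
    hidden: bool = False
    hidden_at: Optional[datetime] = None

    def flag(self, user: str, now: datetime) -> None:
        """Record a flag from a trust level 1+ user; hide at threshold."""
        self.flaggers.add(user)          # duplicate flags don't stack
        if not self.hidden and len(self.flaggers) >= FLAG_THRESHOLD:
            self.hidden = True           # the author is PMed at this point
            self.hidden_at = now

    def author_can_edit(self, now: datetime) -> bool:
        """The author's edits are locked during the cooling-off interval."""
        if not self.hidden:
            return True
        return now - self.hidden_at >= EDIT_COOLDOWN
```

Note that a repeat flag from the same user is absorbed by the set, so one person cannot reach the threshold alone.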

Now some what ifs.

  • The post author edits: the post is un-hidden and nothing else happens. Success! No harm, no foul, and notably no formal moderators had to be around for this to work! The users who flagged gain some trust, and the post author gains back any initial trust they lost.

  • The post author does not edit and does not appeal: the post is never un-hidden, and that user does not get any of their lost trust back. (If their trust level gets low enough, they lose posting privileges; at the lowest levels of trust, they will gently be turned away at the door of the forum altogether.) If a post stays hidden for 30 days, it is automatically deleted.

  • A moderator manually

    • disagrees with the flags: the flaggers get a solid trust penalty. The post author gets back what they lost, plus a tiny trust increase. The post is un-hidden.

    • agrees with the flags: the post author loses a LOT of trust. Like… a lot a lot. The flaggers gain a little trust. The post stays hidden.

    • defers the flags: nothing happens. The post stays hidden. Posts hidden for more than 30 days are automatically deleted.

If the same post is hidden by a second round of flags, the flags must be handled manually by a moderator; at that point there is no automatic un-hiding through editing.

(Note that a repeated set of flags from the same folks over and over on the same user should be discounted. We want a wide range of trusted users to think a person’s post is offensive, not the same people over and over.)
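The repeat-flagger discount in the note above could be sketched like this; the weighting scheme and the 0.5 factor are invented for illustration:

```python
# Hypothetical weighting: a flag from someone who has already flagged the
# same author before counts for less toward the threshold.
from collections import Counter

def flag_weight(flagger: str, author: str, history: Counter,
                repeat_discount: float = 0.5) -> float:
    """Full weight the first time a user flags this author; discounted after."""
    prior = history[(flagger, author)]   # how often this pair occurred before
    history[(flagger, author)] += 1
    return repeat_discount ** prior
```

Under a scheme like this, three flags from the same few people over and over count far less than three flags from a wide range of trusted users, which is the stated goal.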

The goal here is for the community to be able to protect itself from the worst users, even without a moderator present. But it works even better with a moderator, as the moderator can accelerate the process by handling the appeals, which amplifies the speed at which trust is gained or lost.

Or you could disable it altogether, and go back to a world where every raised flag must be manually handled by an official mod, if that’s how you roll. But this’ll be on by default.

(Christoph Rauch) #5

That is sensible, but what can the system do in case of a rogue moderator? Can users penalize moderators, akin to the meta-moderation framework of Slashdot perhaps?

(Tomasz P. Szynalski) #6

Why should you lose “a lot a lot” of trust just because you made an appeal that was rejected? Some of these may be borderline cases – suppose you write something that is mildly off topic or perhaps provocative bordering on trollish, some users flag it, you appeal to a mod, now the mod has two choices:

  1. Side with the flaggers. Result: The poster gets a huge penalty.
  2. Side with the poster. Result: The flaggers get a hefty penalty.

In reality, no one deserves a big penalty because the case isn’t clear-cut – both sides could argue their case reasonably well. Again, I’m thinking about situations where the mod would say “I can see how you might have thought your post was OK, but sorry, I can’t allow it”.

The penalty should be at the mod’s discretion. It should be possible to disallow a post AND let the poster off with a warning.

(Thomas F. Burdick) #7

[quote=“tszynalski, post:6, topic:275”]Solution:
The penalty should be at the mod’s discretion. It should be possible to disallow a post AND let the poster off with a warning.[/quote]

I think this will be a pretty important use case. As a moderator, you might be happy to allow the flagged post under normal circumstances, but you’re trying to tamp down an incipient flame war. It would be a shame to have to choose between allowing the unedited post and significantly penalizing the user, if it’s a pretty minor offense.

(Jason) #8

I agree, I know that I would personally be significantly less likely to take action if it is damaging someone’s trust level especially in the case of users new to a site who aren’t too familiar with how it operates.

Being able to report moderators wouldn’t be a bad thing, however I don’t think that actually penalizing them in some way would work for many forums. Particularly for forums that are small to mid size it might be preferable for an admin or higher ranking moderator to take a look at it and decide what to do rather than allowing the community to decide on penalizing the mod. I don’t mean to say I think it’s not a good feature to have available, but if there is a penalty system for moderators then it should be optional in my opinion.

I know that I personally have taken some not so popular actions as a forum moderator but it needed to be done whether the community liked it or not. Not that being flagged a few times for mod actions would have hurt me, but I know there are much less popular ones out there who are just trying to do their job as a mod rather than abusing their power.

(Christoph Rauch) #9

Users don’t really penalize moderators in Slashdot’s system. Users are presented with a randomized selection of posts, each accompanied by a single moderation action. These actions can be either “up” or “down”, plus a tag like “flame” or some such.

The meta-moderator can then vote this action as either “fair” or “unfair”. This influences the “karma” of the moderator. In Slashdot’s system the moderator is not a fixed person, but is elected by the system based on certain properties of the user, like activity, percentage of upvoted vs. downvoted posts, etc.

So a lot of “unfair” meta-moderations lowers the probability that this user will become a moderator again.
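That feedback loop can be sketched in a few lines; all numbers here are invented, and this is a toy model of the mechanism described, not Slashdot's actual formula:

```python
# Toy model: "fair"/"unfair" meta-moderation votes move a user's karma,
# and karma feeds the chance of being selected as a moderator again.
def meta_moderate(karma: int, verdict: str) -> int:
    """Apply one meta-moderation vote to a moderator's karma."""
    return karma + 1 if verdict == "fair" else karma - 1

def selection_weight(karma: int) -> float:
    """Relative probability of being picked as a moderator (never negative)."""
    return max(0, karma) / 100.0
```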

(Gweebz) #10

I completely agree; I think it should be up to the moderator to choose the penalty (if any) when a user gets an appeal/flag denied. A minor difference of opinions should not penalize either user. Trolling by frequent flagging and/or appealing should be severely punishable. All of this needs to be evaluated on a per-situation basis.

(F. Randall Farmer) #11

There are a lot of great comments about some amazing exceptions on this thread.

BTW - I’m Randy Farmer and I’ve been advising the team on this and other issues. Here are my qualifications:

Especially interesting in this case is the whole of Chapter 10, which you can read for free here:

Anyway - Q&A is different than forums, so we’ll be adapting and experimenting here - so this feedback is great!

## Something Important

The most important thing about reputation scores is that they are in context. “Flagging” reputation should be its own (internal) score. That is the score that goes up or down based on how accurate you are at flagging content, not your general trust score. As @tszynalski points out, significantly modifying your general trust confuses things.

At Yahoo! Answers we learned that people won’t report marginal calls even when only their flagging reputation is at risk, much less when it hurts their overall reputation.

Users definitely were hiding the worst of the worst content. All the content that violated the terms of service was getting hidden (along with quite a bit of the backlog of older items). But not all the content that violated the community guidelines was getting reported. It seemed that users weren’t reporting items that might be considered borderline violations or disputable. For example, answers with no content related to the question, such as chatty messages or jokes, were not being reported. No matter how Ori tweaked the model, that didn’t change.

In hindsight, the situation was easy to understand. The reputation model penalized disputes (in the form of appeals): if a user hid an item but the decision was overturned on appeal, the user would lose more reputation than he’d gained by hiding the item. That was the correct design, but it had the side effect of nurturing risk avoidance in abuse reporters. Another lesson in the difference between the bad (low-quality content) and the ugly (content that violates the rules): they each require different tools to mitigate.

Discourse will need to track multiple reputations, including “flagger” quality - and this has been shown to work to get rid of the very worst (spam/troll) content. It doesn’t deal with the marginal cases (we’re still debating about how to handle “off-topic”) - thoughts on that based on operational experience are most welcome!
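A minimal sketch of keeping the flagger score as its own contextual reputation, separate from general trust; the class, field names, and deltas are assumptions for illustration:

```python
# Each reputation score lives in its own context: a bad flag outcome only
# touches the internal "flagging" score, never the user's general trust.
from collections import defaultdict

class Reputation:
    def __init__(self) -> None:
        self.scores = defaultdict(float)   # keyed by context, e.g. "flagging"

    def record_flag_outcome(self, upheld: bool) -> None:
        """An overturned flag costs more than an upheld flag gains,
        matching the appeal-penalty design described above."""
        self.scores["flagging"] += 1.0 if upheld else -2.0
```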

(Jackdoh) #12

I love the “side topic” feature of Discourse. It would be great if the off-topic flag could be set to automatically convert the post into a related topic along with all replies to it. This would eliminate a lot of trolling, grammar police, and all other kinds of derails.

(Jason) #13

Do you really want a bunch of new troll, grammar police, etc. threads though? I think just hiding one or two posts by having multiple users flag them would work better, rather than cluttering up your forum with pointless threads that will then need to be flagged themselves or cleaned up by a moderator.

(Jackdoh) #14

Ok, maybe obvious spam and trollish posts should be hidden. But sometimes there is stuff that is a borderline derail, or only interests a small minority of readers, or is a really interesting tangent that shouldn’t be on the main thread. Maybe the author forgot/didn’t know/didn’t care to create it as a new topic; the community then can decide to excise the post and its replies, but not throw them away completely.

(Jason) #15

That sounds really good to me, especially if it doesn’t require moderator interaction to make the new thread. Maybe have a checkbox when flagging off-topic posts?

(Adam Capriola) #16
  1. Is this suggesting that flagging is meant for non-moderators? Should moderators take more direct action?

  2. I really think mods should get a notification when the user edits their post so they can view it and make sure it’s ok. I will flag a post, it gets hidden, and I think it’s dealt with, but then maybe I come back later and see that all the user has done is add one of these to their post: :stuck_out_tongue:

    The edited post should be part of the flag workflow (flag threshold hit, mods accept or decline, user edits post, mods then accept or decline the edit, etc.). Allowing a post to be flagged a second time seems like an unnecessary extra step to me; let it take one flag and be sure the issue is resolved.
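The single-flag workflow suggested above could be modeled as a small state machine; the states and events here are illustrative, not existing Discourse behavior:

```python
# After an edit, the post goes back to the mod queue instead of being
# automatically un-hidden; only a moderator's accept makes it visible again.
WORKFLOW = {
    ("visible", "flags_reach_threshold"): "hidden",
    ("hidden", "author_edits"): "pending_review",
    ("pending_review", "mod_accepts_edit"): "visible",
    ("pending_review", "mod_declines_edit"): "hidden",
}

def step(state: str, event: str) -> str:
    """Advance the workflow; unknown events leave the state unchanged."""
    return WORKFLOW.get((state, event), state)
```

The key design difference from the default flow is that `("hidden", "author_edits")` leads to `pending_review` rather than straight back to `visible`.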

(Wolftune) #17

I actually really prefer the approach where you don’t need mods to step in. That’s the best part of Discourse’s approach to flagging. But the concern about getting notified about edits is spot on. However, IMO it should be the original flaggers who get the notification, not the mods:

I opened this: Notification to flagger(s) when a flagged post is edited?

I disagree about that. That pushes against the learn-by-editing approach. The goal is not just that one post is fixed, the goal is that people learn to avoid the bad patterns overall, and if it takes some people more than one edit to learn what’s acceptable, fine.

(Jeff Atwood) #18

Yes, agreed, this has bugged me for a long time.

The sad reality is that so few posts get flagged to threshold, and then edited, that we have precious little signal to work with.