Sorry one thing at a time, what are the scores for the other sensitivities?
One thing we are going to do now is bring down the user accuracy thing quite a bit.
We just merged a change (PR) that more accurately calculates a user’s flagging accuracy.
In the past, user accuracy was always a positive value, added onto 1 + trust level. This is problematic for users who actually have low accuracy and whose credibility should be decreased.
Now, users with high trust levels will not automatically have a high impact on whether or not a post is hidden; their historical accuracy will affect their credibility (subtotal) going forward.
With this change, user accuracy will subtract from a flagger’s subtotal if the flagger is inaccurate more than 30% of the time. This reduces their impact, and an inaccurate flagger will no longer be able to hide a post single-handedly.
The new change also gives accurate flaggers more credibility, as they continue to flag posts.
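As a rough illustration, the adjustment described above might look something like the sketch below. Only the `1 + trust level` base and the ~30% inaccuracy threshold come from this thread; the function name, the linear scaling, and the exact formula are assumptions, not the actual implementation:

```python
# Hypothetical sketch of the flag-credibility adjustment described above.
# The scaling and formula are assumptions; only the "1 + trust level" base
# and the ~30% inaccuracy threshold come from the description.

ACCURACY_THRESHOLD = 0.7  # flaggers accurate less than 70% of the time lose credibility


def flagger_subtotal(trust_level: int, accuracy: float) -> float:
    """Credibility a single flag contributes toward hiding a post."""
    base = 1 + trust_level
    if accuracy < ACCURACY_THRESHOLD:
        # Inaccurate flaggers now subtract from their own subtotal,
        # so a single inaccurate flag can no longer hide a post on its own.
        return base - (ACCURACY_THRESHOLD - accuracy) * base
    # Accurate flaggers gain credibility as they continue to flag accurately.
    return base + (accuracy - ACCURACY_THRESHOLD) * base
```

Under this sketch, an inaccurate flagger contributes less than their trust level alone would suggest, while an accurate one contributes more.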
I understand the system is trying to be smart, but I’m having a hard time understanding how many flags a post needs before it’s hidden at this point, let alone explaining to users how it works with accuracy, sensitivity, and the math.
Does it no longer hide with a single flag at this point?
Also, any ETA on the min flaggers to hide
option? That was the perfect solution for us.
We’ve tweaked the logic a lot based on feedback, and most of the time a single flag will no longer hide a post.
We haven’t definitely committed to adding back the min flaggers to hide
option. It’s on the table, but only once we’ve exhausted everything else.
What would be helpful to us is if you could provide some examples where someone’s post was hidden when it shouldn’t be, or vice versa. Where is the current system failing you?