User Flagging Ratio report scores

Scores in the User Flagging Ratio report seem a bit confusing. :slightly_smiling_face:

For example:

  • Users with only disagreed flags are scored above those with no flags, and above those who have similar numbers of agreed and disagreed flags.
  • Those with e.g. 100 disagreed and 100 agreed are scored exactly the same as someone with no flags, at 0.

What’s the intention behind this score? (cc: @joffreyjaffeux)


The current formula for each score is:

if disagreed == 0
  agreed * agreed
else
  ((1 - (agreed.to_f / disagreed)) * (disagreed - agreed)).to_i
end

source: app/models/reports/user_flagging_ratio.rb
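
That formula can be sketched as a small runnable Ruby snippet (note: `current_score` is a hypothetical helper name for illustration, not the actual method in the report class; float division for the ratio is assumed, since that's what the table below implies):

```ruby
# Hypothetical helper reproducing the score formula as described above;
# not the actual implementation in user_flagging_ratio.rb.
def current_score(agreed, disagreed)
  if disagreed == 0
    agreed * agreed
  else
    # .to_f assumed: integer division would collapse the ratio to 0 or 1
    ((1 - (agreed.to_f / disagreed)) * (disagreed - agreed)).to_i
  end
end

current_score(15, 0)    # => 225  (only agreed flags: agreed squared)
current_score(100, 100) # => 0    (equal agreed/disagreed scores the same as no flags)
current_score(0, 15)    # => 15   (only disagreed flags still scores above zero)
```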

Here’s an example of scores for 0, 5, 10, and 15 agreed and disagreed flags:

| disagreed \ agreed |  0 |  5 |  10 |  15 |
|-------------------:|---:|---:|----:|----:|
|                  0 |  0 | 25 | 100 | 225 |
|                  5 |  5 |  0 |   5 |  20 |
|                 10 | 10 |  2 |   0 |   2 |
|                 15 | 15 |  6 |   1 |   0 |

And that’s how it looks if you throw that formula into Wolfram Alpha. :wink: (x is agreed, and y is disagreed)

edit:

Most recent changes to this algorithm are mentioned here: User_flagging_ratio_count - #8 by joffreyjaffeux

3 Likes

My use case for this particular report is to identify the users who are most, or least, aligned with moderation. From that perspective, 15/0 and 0/15 are equally important, for different reasons, and scores of 225 / -225 in those cases would make the situation much more precise. I'm not sure how the algorithm could do that while also weighting for flag volume, which is equally important.

3 Likes

Wow, that means that with the agreed number held constant, the score gets better and better as the disagreed number increases (when disagreed > agreed).

A more reasonable and direct formula may be something like:

(agreed * agreed) - (disagreed * disagreed)

That is, agreed flags always raise the score and disagreed flags always lower it. I don't know if there is a need to make the formula more complex than that, but anyway, just my opinion (if it's ok for the score to be negative).
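
A sketch of that proposal (`proposed_score` is a hypothetical helper name, not a patch against the actual report):

```ruby
# Hypothetical sketch of the proposed score: agreed flags always raise it,
# disagreed flags always lower it, and the score can go negative.
def proposed_score(agreed, disagreed)
  (agreed * agreed) - (disagreed * disagreed)
end

proposed_score(15, 0)  # => 225  (fully aligned with moderation)
proposed_score(0, 15)  # => -225 (fully misaligned)
proposed_score(10, 10) # => 0    (mixed record scores like no flags at all)
```

Note that this still scores a 100/100 user the same as a 0/0 user, so it doesn't weight flag volume either.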

I think this is okay, though. Is it really a problem that the score is the same (zero) when the two values, one that should increase the score and one that should decrease it, are equal?

1 Like

Didn’t write most of it; I mostly fixed a bug that prevented records from showing when you had more disagreed than agreed, so I'm not sure about most of the logic. @eviltrout

I did the agreed * agreed because I thought it would be interesting to emphasize users we never disagree with. But that's an edge case and most users won't be in that bucket, so we should probably optimize the other branch.

Feel free to correct the formula to better handle any specific case this doesn’t take into account.

1 Like

You should check with @eviltrout

1 Like

I am definitely open to suggestion here. What would you suggest in its place?

2 Likes