Right now Discourse has many different interfaces for reviewing things:
Flagged Posts / Topics
Approving Users
Queued Posts (a user typed something too fast, or some other heuristic was triggered)
Akismet Plugin
Each one of the above has a different URL and interface, despite being a similar operation at its core. Additionally, the operations above are restricted to staff, and on some forums there is interest in allowing higher trust level users to help out with approvals.
Proposal
The next version of Discourse will introduce the concept of a "Review Queue", for example at a /review path.
When a user visits the review queue:
It will return all items that particular user can review.
The user will be able to filter by type, for example only flags.
The user will be able to complete the review by choosing an action.
Technically, this means:
We will expose an interface for a Reviewable, which can be used by core discourse as well as plugins.
Reviewables must expose a list of actions a user can take. This gets quite complex for silences/suspensions so will likely require a fair bit of work to abstract nicely.
It should be developer friendly, with simple APIs for plugins to use and appropriate DiscourseEvents for the lifecycle.
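To give a rough sense of the kind of lifecycle hook this implies for plugin authors, something like the following could work (the event name and arguments here are illustrative guesses, not a committed API):

# Illustrative only: the event name and block arguments are guesses at what a
# reviewable lifecycle event might look like, not a committed API.
DiscourseEvent.on(:reviewable_transitioned_to) do |status, reviewable|
  Rails.logger.info("Reviewable #{reviewable.id} moved to #{status}")
end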
Spent all day working on this and have some additional thoughts:
If we're letting non-staff members review stuff, this could open up the door for "category specific" moderators, which some people have asked for. For now, I've set up the schema such that a Reviewable can be limited to admins, moderators, or a group.
I am converting user approval first, since it seems to be the simplest. I did find a bunch of places in the code where a user was marked as approved without going through a central code path, so it will be a good excuse to centralize all that stuff.
While you're at it, it would be cool to integrate this plugin into core.
I think we briefly discussed this somewhere, but I believe it has since been deleted. So I'll repeat: we practice this with success, even though not with the plugin. We have "approve unless trust level" set to 1, and we regularly lock people to trust level 0 if we don't trust them for whatever reason.
Of course, most of the time suspending temporarily or for good is what we do. But for some cases, forcing the user through moderation is a nice tool to have. Some users are like the character from the novel "Dr Jekyll and Mr Hyde". Mostly they produce good or harmless content, but then they freak out every now and then (typically on weekends) and show their toxic side. If they clearly can't help it, we can let them keep their account and they can still post their good stuff.
It would be nice to promote this practice with real support, even if there are already two ways to achieve it if you know about them.
As mentioned in "Email on flagged post or those requiring approval", I think we may need control over who can and can't "reject" posts. For instance, with Trust Level 3 users, we might want to make it so they can only approve posts, and everything else is left in the queue for a trained staff member to review. So any kind of destructive action would only be taken by staff. To make it more user friendly for them, perhaps we could have a "deferred" action (so "approve" or "defer"), and then deferring would remove that item from the approval queue for them.
It would also be good to have a record of who approved what, both in a list somewhere (which I think we have in the logs anyway) and displayed with the actual post (e.g. "this post was approved by…", even if it is a mouseover message on a "this post was approved" icon).
I've made some good progress on the design spike of this feature over the last week. I'm working in this branch (which I force push to frequently) if you want to follow along:
The current design is you have a Reviewable model, which uses Rails' Single Table Inheritance. If you have a thing you want to be added to the review queue, you create a model for it that extends Reviewable. For example a ReviewableUser would represent a User that needs review.
Each Reviewable has a polymorphic association called target. For a ReviewableUser you'd set it to be the User you want reviewed:
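As a rough sketch of what that could look like (the attribute names follow the description above and may not match the final API on the branch):

# Rough sketch: create a ReviewableUser whose polymorphic target is the user
# awaiting approval. Attribute names here are assumptions, not the final API.
reviewable = ReviewableUser.new(created_by: Discourse.system_user)
reviewable.target = user_awaiting_approval
reviewable.save!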
You can then establish the actions that can be performed on a ReviewableUser:
class ReviewableUser < Reviewable
  def build_actions(actions, guardian)
    return unless pending?
    actions.add(:approve) if guardian.can_approve?(target)
    actions.add(:reject) if guardian.can_delete_user?(target)
  end

  def perform_approve(performed_by)
    target.approved = true
    target.approved_by ||= performed_by
    target.approved_at ||= Time.zone.now
    target.save!
    PerformResult.new(:success, transition_to: :approved)
  end

  def perform_reject(performed_by)
    destroyer = UserDestroyer.new(performed_by)
    destroyer.destroy(target)
    PerformResult.new(:success, transition_to: :rejected)
  rescue UserDestroyer::PostsExistError
    PerformResult.new(:failed)
  end
end
A few notes about actions:
They can be built based on the user requesting their list of stuff to review, so it's easy to add logic that restricts an action to certain user types (admins? members of a group?).
The UI will build buttons for those actions. When selected, the call will delegate to the perform_#{action_id} method to do the thing. The results of operations can be success/failure, and can optionally return a new state for the reviewable. So in the code above, the perform_approve action handler will transition the reviewable to approved when complete.
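As a rough sketch of that delegation (the dispatch helper and the PerformResult accessors below are assumptions based on the description above, not the final API):

# Assumed dispatch helper: looks up the handler for the chosen action id and,
# on success, moves the reviewable to the returned state. PerformResult's
# status/transition_to accessors are assumptions.
def perform(reviewable, action_id, performed_by)
  result = reviewable.public_send(:"perform_#{action_id}", performed_by)
  if result.status == :success && result.transition_to
    reviewable.update!(status: result.transition_to)
  end
  result
end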
More Fun Stuff
Reviewable content has its own system for choosing who can see it. You can restrict a reviewable to be shown only to admins, to moderators, or to a particular group (a rough sketch follows below).
I am building in the ability to "claim" a reviewable. Claiming topics for moderation currently works via the discourse-assign plugin but it's very awkward and buggy. Most people probably won't use it, but having the ability to do it at the row level will help a lot.
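Here is a rough sketch of what the visibility restriction mentioned above might look like (the column names are guesses based on the description, not the actual schema on the branch):

# Guessed column names; "helpers" below is a hypothetical Group record. The
# real schema on the branch may differ.
reviewable.update!(
  reviewable_by_moderator: true,      # visible to moderators as well as admins
  reviewable_by_group_id: helpers.id  # or restrict it to a particular group
)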
I've made some more progress, all of it continues to be in the reviewable branch.
Updates
User Approval has been fully migrated over to the new backend data structure. Both the REST API and User.approve ruby API will continue to work for backwards compatibility, but they are calling Discourse.deprecated so we can identify invalid usage. They will be removed in the future.
I added a ReviewableHistory log to keep track of changes to a reviewable.
I decided that Reviewables should be unique by type and target, enforced at the database level. I don't think it makes sense to have two ReviewableUser records for the same user, for example. It makes more sense to change the existing one's state back to pending so it can be approved again. This is handled for you by the new #needs_review! API. It will create a new reviewable, or return an existing one that has been moved back to the pending state.
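For illustration, usage could look roughly like this (the keyword arguments are assumptions based on the description above, not the final signature):

# Assumed usage: returns a pending ReviewableUser for this target, re-opening
# an existing record instead of creating a duplicate.
reviewable = ReviewableUser.needs_review!(
  target: user,
  created_by: Discourse.system_user
)
reviewable.pending?   # => true, even if an earlier reviewable had been approved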
Up Next
I'm going to migrate Queued Posts over to the review queue.
In another project (FrankerFaceZ emote approvals), this action is called Escalate.
Escalating an action removes it from the lower-privileged review queue and adds it to the other review queue, where decisions can only be made by the head of approvals.
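If Discourse adopted something like this, it could presumably be modelled as just another action on a Reviewable. Everything below is speculative and only reuses the action-building pattern shown earlier in the topic; the :escalate action and the visibility attribute are not part of the branch:

# Speculative sketch of an "escalate" action, reusing the pattern from the
# ReviewableUser example above. The reviewable_by_moderator attribute is a
# guess at how visibility could be narrowed to admins only.
def build_actions(actions, guardian)
  return unless pending?
  actions.add(:escalate) if guardian.is_staff?
end

def perform_escalate(performed_by)
  update!(reviewable_by_moderator: false)
  PerformResult.new(:success)
end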
(Notes:
rejection must be accompanied by a Reason, the dropdown for which is cut off on the right side.
"Approve Don't Replace" can be ignored; Discourse already has a better alternative with "Agree and…".)
I'm not sure of the full implications of doing this, but I like the sound of it.
I'm constantly wary of being overly reactive or severe at TL4, and even at TL3, so I'm interested in the ability to indicate that I think the issue is too curly for me to decide on the action.
Flagging as spam is one example.
These situations occur quite often because:
There are cross-cultural issues when I'm on US websites. Although we share the English language and I watch US TV/movies, there are many subtle but significant differences in our interpretation of content: sarcasm; understanding of idioms that normally have non-verbal cues; the amount of acceptable teasing and trash talking; humour (correctly spelled); and so on.
I have no relationship with the site owner and moderators in any other milieu…
… and the potential consequences almost always fall on them and not on me.
Another thing to consider is that when escalating a flagged post (as opposed to a pending user), it's often appropriate to create a topic in the #staff (or #lounge) category to talk about it and obtain two-person approval [1] for the action you're going to take.
Escalate…
To Admins
To Staff
Create Lounge Topic
Create Staff Topic
Footnote 1: this is just a fancy way of saying one other staff member saying "yes, looks good".
Time for another update here: My branch has user approval working quite well. Queued Posts are mostly done, although I have a few tests to write and a little more backwards compatibility to add.
I went back and forth a bunch about backwards compatibility with existing APIs, and decided anything worth doing is worth doing properly, so I've made a big effort to maintain compatibility where it seems to be used. This means that for Queued Posts, for example, the webhooks will continue to work (although there are newer, preferable webhooks) and the queued-post REST API continues to work with a "more or less" identical output. The old Ember interfaces have been trashed.
On one hand, keeping the old REST APIs working feels like some amount of extra work, but on the other it has forced me to identify edge cases that I would have missed. Keeping it backwards compatible has really battle tested the new data structure, and I feel a lot more confident about it.
Once queued posts are done the next big one is flags, which is by far the most complex. I still would like to keep the REST API in that case, but I might not do it if I'm fighting it too much.
Queued Posts ended up having a lot more edge cases than I initially expected, but isn't that always the case? They're fully done and working in the branch now, so I've moved on to flags.
Flags are going to take some time, but while I'm in there I'm improving the underlying data structures to add new features.
For example, one setting we added to help sites that get a lot of flags is min_flags_staff_visibility. On a forum that receives thousands of flags a day, staff can set that to a value higher than 1, and then only see flags that meet that threshold.
I've never been happy with the feature as implemented. Some users are really good at flagging, and if they flag something, it should show up even if nobody else has flagged it.
What I've done instead is add a score field to reviewables. When a user flags a post, the flag is given a score. The Reviewable's score is the sum of the scores from the users who flagged it.
This is a very early screenshot, don't judge it yet:
The current flag interface shows 200 character excerpts of posts instead of their full content, and you have to click "show full post" to see the original. I'm wondering if this was the right choice, because if a moderator has to review the content of a post, 200 characters is often not enough.
Additionally, Queued Posts (current approval queue) shows the entire contents, so putting both in the same place seems odd since some are full length and others are not.
For my first version I think I'll try showing the entire post in the queue and see how we like it. I think it'll save the average moderator a fair bit of clicking.
One concern here is that spam posts can often be enormous. I would actually be on the side of keeping vertical height somewhat consistent, with a click to expand.
I haven't updated in a couple weeks, but I've made some good progress on migrating flags over to the review queue. I have been distracted this week with some family emergencies, but those should be sorted out soon.
The refactor here is quite major and involves many changes. I am unsure how other team members will be able to review the PR eventually because it will be giant, but we'll do our best!
I have reached the point in the refactor where I want to implement scoring properly. This means removing a bunch of settings we had for flags (min_flags_staff_visibility, for example) and replacing them with score-based equivalents. I wanted to jot down my idea for flag scores here and see what people think before I go too far implementing:
a ReviewableFlaggedPost has a score
The score is the sum of the ReviewableScore records for that flagged post. Each ReviewableScore record represents a flag a user has applied to the post. The ReviewableScore score is calculated as user_flag_score + flag_type_score_bonus + take_action_bonus.
flag_type_score_bonus would be configurable by flag type. For example, you could set spam to be higher than inappropriate if you desired.
take_action_bonus: a value (0.0 or 5.0) depending on whether a staff member "took action"
user_flag_score is calculated per user, and is: 1.0 + trust_level + accuracy_bonus
trust_level is the user's trust level (0.0 - 5.0)
accuracy_bonus is the percentage of the user's previous flags that were agreed with * 5, for a value of (0.0 - 5.0). A minimum of 5 flags is required.
So for example, suppose a post was flagged by two users. One (u0) is TL1 and has never flagged before, and the other (u1) is TL3 and has their flags agreed with 50% of the time. Both flags are of a type whose flag_type_score_bonus is set to 1.5:
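Working through the formula above (and assuming no staff member has taken action yet, so take_action_bonus is 0.0 for both flags):
u0 (TL1, fewer than 5 previous flags, so no accuracy_bonus): user_flag_score = 1.0 + 1.0 + 0.0 = 2.0, and the flag's ReviewableScore = 2.0 + 1.5 + 0.0 = 3.5
u1 (TL3, 50% accuracy, so accuracy_bonus = 2.5): user_flag_score = 1.0 + 3.0 + 2.5 = 6.5, and the flag's ReviewableScore = 6.5 + 1.5 + 0.0 = 8.0
The ReviewableFlaggedPost's total score would then be 3.5 + 8.0 = 11.5.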
One way is to put all the action buttons on top, so the click targets are always stable; the post renders below and can be giant without impacting UX.
That allows people to quickly scan and review/approve a bunch without moving the mouse.
Probably the best way to do this is to implement the rest of the data model migrations, then run your proposed score algorithm against the entire flag history of a few sites, looking for outliers. People can probably survive for a while with a "newest unresolved first" strategy.
I forgot to update, but this is the solution I went with. I limit the max height of the post and a scrollbar shows up. I think it's a pretty good solution.
I implemented this feature this morning. Agree/disagree/ignore stats are now based on the last 100 flags, so the user has a chance to improve.
I wanted to do it in master and the reviewable branch, but it was basically 2x the work, so it's only in the reviewable branch right now.