What about requiring a nominal membership fee to allow posting, like $1?
If all else fails…
I would:
1- Block the domains of the throwaway email providers in Discourse settings.
Or only accept emails from specific providers by whitelisting them and blocking all others by default.
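The whitelist-first policy above can be sketched roughly like this (illustrative Python only - the domain lists and function name are made-up examples; in Discourse itself these correspond to the admin site settings for blocked and allowed email domains):

```python
# Example domain lists - placeholders, not a real blocklist.
BLOCKED_DOMAINS = {"mailinator.com", "guerrillamail.com"}    # known throwaways
ALLOWED_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com"}  # trusted providers

def may_register(email: str, allowlist_mode: bool = True) -> bool:
    """Return True if the email's domain passes the registration policy."""
    domain = email.rsplit("@", 1)[-1].lower()
    if allowlist_mode:
        # Whitelist approach: block everything not explicitly allowed.
        return domain in ALLOWED_DOMAINS
    # Blacklist approach: block only the known throwaway domains.
    return domain not in BLOCKED_DOMAINS
```

The whitelist mode is stricter but will also turn away legitimate users on smaller providers, which is the trade-off the post is describing.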
2- Block Tor access - for a while.
3- If (and that’s a big IF) you still have the original IP address they signed up with - report to their Internet Service Provider.
More on that here:
Key takeaways:
Most activities that violate my user guidelines also tend to violate most ISPs' AUPs (Acceptable Use Policies).
However, if I have a user who evades bans and tries to come back over and over again, then he's disrupting my community and wasting my time and the time of my staff, so I'll report that person to their ISP. If someone comes and makes 273 spam posts, that is a serious disruption of my community (pushing new threads 10 pages down) and I'll report that person to his ISP as well.
I look up the user's IP address and then search for it on DomainTools to see who owns it. I then visit the owner's website and contact them via their abuse contact. Most ISPs have a designated abuse contact; if they don't, I'll just use their general support contact.
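Once you have whois output for an IP (from the `whois` CLI or a lookup service like DomainTools), extracting the abuse contact can be scripted. A minimal sketch - the sample whois text below is fabricated, using an RFC 5737 documentation address:

```python
import re

# Fabricated sample whois output for illustration (203.0.113.0/24 is a
# reserved documentation range; the ISP and emails are invented).
WHOIS_TEXT = """\
NetRange:       203.0.113.0 - 203.0.113.255
OrgName:        Example ISP
OrgAbuseEmail:  abuse@example-isp.net
OrgTechEmail:   noc@example-isp.net
"""

def abuse_contacts(whois_text: str) -> list[str]:
    """Return email addresses found on whois lines that mention 'abuse'."""
    emails = []
    for line in whois_text.splitlines():
        if "abuse" in line.lower():
            emails += re.findall(r"[\w.+-]+@[\w.-]+", line)
    return emails
```

Whois output formats vary by registry, so in practice a manual check of the output (or falling back to the ISP's general support contact, as described above) is still sometimes needed.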
Sometimes no action will be taken, but sometimes it will. I remember one user who had PM-spammed one of my sites. I reported him and, a while later, found him complaining on his website about how I had reported him to his ISP and how they had suspended him.
We have used this at XDA with a lot of success in the past - especially for someone being abusive.
I’m confused, why isn’t community reporting hiding each rogue posting quickly?
Users generally don’t flag that much. They view it as being a “rat”.
Unless the content is incredibly violent / dangerous, or very obvious spam, of course.
I take a robust approach to blocking disposable email addresses - after a bit of searching on the subject, I compiled a list of 3,700 domains to block. I use SSO, so this is not managed within Discourse itself (I'm not sure whether you could add this many records to the Discourse blacklist…).
https://gist.github.com/richp10/2938dbd28300241d444f45eb5d1d364f
Not entirely sure whether this list is comprehensive or whether it includes 'false positives' - but in principle I am happy to try to prevent registrations from disposable email addresses.
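An SSO-side check against such a list could look roughly like this (a sketch only - it assumes the gist's domains have been saved locally as `disposable_domains.txt`, one per line; the file name and function names are illustrative):

```python
def load_blocklist(path: str) -> set[str]:
    """Load one blocked domain per line, normalised to lowercase."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_disposable(email: str, blocklist: set[str]) -> bool:
    """True if the email's domain is blocked, including subdomains
    of a blocked domain (e.g. mail.mailinator.com)."""
    domain = email.rsplit("@", 1)[-1].lower()
    return any(domain == d or domain.endswith("." + d) for d in blocklist)
```

Rejecting these registrations at the SSO layer, before the account ever reaches Discourse, avoids the question of how many entries the built-in blacklist can hold.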
Ways I have successfully increased use of flags on a forum I moderate:
- don't describe it as flagging or reporting primarily – call it member moderation and use that term a lot; most active members will have at least a vague desire to be mods themselves;
- explain that a timely flag is a way to get people back on track early, so they DON'T need to be banned or suspended, and that flagging a post will not automatically trigger a sanction, but that mods will open a conversation with that person and attempt to resolve things;
- I always thank people for the flag, remind them that they are doing member moderation, and explain, as far as possible without breaching confidentiality, what action/s were taken on their flag, and why. If the flag was incorrect but done in good faith, I mention that false positives are unavoidable and that they happen to me too (which is true - suspicions about members that are disproven over time, etc.).
So basically invoke people's natural desire to protect their community, especially its newer or more vulnerable members, and to see themselves as the sheriff, so to speak, not as the guys ratting someone out to the forum cops. I share this because it has increased the number of flags substantially, and in a short period of time.
You could also make a tutorial for the Lounge, wording flagging as a service to the community and emphasising the protective and “member moderation” aspect.
A function that's just sat there, especially if your community is new to Discourse, is different from a function that has been explained by someone they know and trust, and who has encouraged its use (you could include humorous examples specific to your community, and funny pictures, in your tutorial), so that they feel confident about what will happen and are aware it's expected and welcomed.
Giving new signups information on flagging means the new generation will accept “member moderation” as part of their rights and responsibilities, and you will begin to raise a new “generation” with that culture.
I’ve just begun implementing this in individual PM conversations with new members and may add it to the signup message, because so far it’s been well-received when described as a positive way to keep the community safe and focused.
This has existed for quite some time now in @discobot – try signing up for a new account at https://try.discourse.org and respond to the PM it sends you (and every new user, unless it's disabled).
That is actually a very good point! The modal actually says “Privately notify staff”, so it should be clear that flagging doesn’t trigger stuff automatically (although we know it does in certain situations). But to see that you need to click the flag in the first place. So your point is absolutely valid.
Also this one: