Handling trolls with multiple accounts over VPNs

Hey all, we’re seeing an increase in troll users (usually users who have been suspended in the past for abusing other members) over on our Discourse forum. We can usually identify a troll account from its email address and source IP and deal with it, but sometimes it’s not until they post a thread with more abusive content that we realise it isn’t a “real” account.

I’ve already tried a few wildcard IP bans based on some DigitalOcean IPs (our trolls tend to use VPNs, most of which originate from a DO instance), but this hasn’t really proved to be effective.
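For anyone curious what a wildcard ban amounts to: it’s essentially a prefix (CIDR) match on the source address. A rough sketch of the check, with placeholder ranges rather than real DigitalOcean allocations:

```typescript
// Sketch of the range check behind a wildcard IP ban.
// The ranges below are documentation/example prefixes, not real DO blocks.

/** Convert a dotted-quad IPv4 address to a 32-bit unsigned integer. */
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

/** Check whether an IP falls inside a CIDR block such as "203.0.113.0/24". */
function inCidr(ip: string, cidr: string): boolean {
  const [base, bits] = cidr.split("/");
  // Special-case /0: a 32-bit shift is undefined behaviour in JS.
  const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

const bannedRanges = ["203.0.113.0/24", "198.51.100.0/22"]; // placeholder VPN ranges
const isBanned = (ip: string) => bannedRanges.some((r) => inCidr(ip, r));

console.log(isBanned("203.0.113.42")); // true  — inside the first range
console.log(isBanned("192.0.2.1"));    // false — outside both ranges
```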

Has anyone got any solutions on how to tackle trolls?

9 Likes

Bummer you’re getting an infestation. There’s recently been a discussion in another topic about similar matters, which has a few ideas in it. It might be worth reviewing those to see if any might work for you.

8 Likes

VPNs are really hitting the mainstream now. Opera Software has just released a free VPN for Android and for their PC web browser. These alone offer trolls a bunch of IPs. Then there are ZenMate, FreeDome, and a gazillion other VPN providers aiming for mainstream success. Nerds can be even more innovative with, for example, hourly-billed VPSes and SSH tunneling.

This strengthens my case for cookie-based user tagging, even though it will not be nerd-proof.
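Roughly what I have in mind, as a minimal sketch — the store and helper names here are hypothetical, not an existing Discourse API. Tag each browser with a long-lived random cookie, and flag signups arriving with a tag already associated with a suspended account:

```typescript
// Hypothetical sketch of cookie-based device tagging, not a Discourse API.
import { randomUUID } from "crypto";

const store = new Map<string, Set<string>>(); // device tag -> usernames seen on it

/** Return the existing device tag from the Cookie header, or mint a new one. */
function deviceTag(cookieHeader: string | undefined): { tag: string; isNew: boolean } {
  const match = cookieHeader?.match(/(?:^|;\s*)device_tag=([\w-]+)/);
  if (match) return { tag: match[1], isNew: false };
  return { tag: randomUUID(), isNew: true }; // set this back as a long-lived cookie
}

/** Record a username against a tag; flag signups from tags tied to suspended users. */
function recordSignup(tag: string, username: string, suspended: Set<string>): boolean {
  const seen = store.get(tag) ?? new Set<string>();
  seen.add(username);
  store.set(tag, seen);
  // If any account previously seen on this device is suspended, flag for review.
  return [...seen].some((u) => suspended.has(u));
}
```

Clearing cookies defeats it, of course — the point is only to catch the lazy majority, not the nerds.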

3 Likes

Thanks, Matt - I had a brief read through that thread earlier.

We have a similar situation to what @ljpp described in the linked thread: our trolls are not genius cyber criminals. I’ve spoken to one of the trolls in the past (tip: reasoning with a troll isn’t worth the time) and he said he just uses a free VPN in his browser - likely one of the many listed above, which usually offer multiple locations and consequently more IP addresses.

I tried blocking some IP ranges, but after a while it becomes a game of cat and mouse and seems like a waste of time.

Whilst cookie-based user tagging may not be the most secure method of preventing trolls, it seems fairly simple to implement and would eliminate the feeling of “wow, I can’t believe it was this easy to create another account!” that any suspended user will probably go through (we’ve seen this happen a lot, even without the suspended user trolling). Fingerprinting is more advanced and, in my eyes, can be saved for a later date, but something simple like this should be added in the meantime.

We’re committed to keeping our community troll-free, but it would be nice for Discourse to implement some more protection at the source (we’re on a hosted plan) - dealing with insults hurled at moderators and other users isn’t really that motivating for us as you can probably imagine :slight_smile:

10 Likes

Hmm, how does cookie-based tagging prevent people from using incognito or anonymous mode in their browser? Do we really think these people are smart enough to use a VPN to evade IP bans, but not smart enough to switch their browser to anonymous/incognito mode?

It would be a tremendous waste of engineering effort to spend a lot of time on a cookie check when all the user needs to do to evade it is tick “anonymous” or “incognito” mode in their browser…

My suggestion is to lock down new account creation – temporarily require approval for new account signups – and vet each new account.

5 Likes

As far as non-cookie-based browser fingerprinting goes, here are Panopticlick results for my iPhone 6s Plus in anonymous Mobile Safari, compared to my wife’s iPhone 6s.

It’s not looking too great: of 14 criteria, the maximum uniqueness is 1-in-280, and only 4 are more than 1-in-100.

Furthermore, the only way to tell my wife’s iPhone apart from mine is screen size. Anyone on the same iOS version with the same screen size would look identical…

1 Like

You trimmed the important bit a couple of (short) paragraphs further up: how unique your browser is overall amongst the ~140k browsers in the Panopticlick DB. Those characteristics aren’t, for the most part, strongly correlated (some are, like accept header and user agent, but most are almost completely orthogonal), so you need to add the “bits of identifying information” together to get the total amount of “uniqueness” conveyed by a browser.
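To make the arithmetic concrete, here’s a quick sketch with made-up per-characteristic rarities (not the actual Panopticlick figures), assuming the characteristics are independent:

```typescript
// How "bits of identifying information" combine across independent characteristics.
// The rarity figures below are invented for illustration.
const oneInN = [280, 100, 20, 8, 4]; // hypothetical per-characteristic rarities

const bits = oneInN.map((n) => Math.log2(n));      // surprisal of each observed value
const totalBits = bits.reduce((a, b) => a + b, 0); // independent bits simply add
const effectiveRarity = 2 ** totalBits;            // back to "one in N browsers"

console.log(totalBits.toFixed(1));        // ≈ 24.1 bits
console.log(Math.round(effectiveRarity)); // ≈ one in 18 million
```

So five individually weak signals, none rarer than 1-in-280, can still combine into something close to globally unique.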

At any rate, after some discussion internally, this isn’t something we (CDCK) are going to be able to work on any time soon (many irons in the fire and all that). However, a plugin from one or more community members (either written or sponsored by them) would, I’m sure, be appreciated by many. Or at least by that subset of site owners plagued by slightly-smarter-than-the-average-bear trolls, anyway.

9 Likes

Is there any way you can forward us particular incidents?

I am curious to see the scale of the issue:

  • Does it happen hourly? Daily? Weekly?

  • Are trolls being flagged?

  • Is this the same person, over and over? Is it 20 people?

Mitigation strategies highly depend on the extent of the issue.

4 Likes

Approving first posts is also a great idea, and easier than approving signups. This way you are acting on actual data from new users, at the time they post.

9 Likes

Just saying that it adds to the forensic evidence, while the significance of the IP is eroding for multiple reasons. VPNs are going mainstream - I know totally non-tech people who are customers of a commercial VPN service. And outside the super-nerd world there are still trolls, with skill levels ranging from 0 to 100.

But this Panopticlick is an interesting concept. It’s the first time I’ve heard of it - I’ve got to have a look.

My understanding is that Panopticlick is a proof of concept and is not kept up to date with all the latest fingerprinting techniques. It does not cover the HTML5 canvas method, for example.

https://www.browserleaks.com/canvas

In my personal tests, that canvas test alone is not a great fingerprinting tool. But combined with others, it may be of use. The browserleaks site has some others.
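For anyone who hasn’t seen the technique, here’s a minimal browser-side sketch of canvas fingerprinting — illustrative only, not what browserleaks actually ships. The idea is to render fixed text and shapes, then hash the resulting pixels; small GPU, driver, and font-rendering differences change the bytes, so the hash varies per machine:

```typescript
// Minimal canvas fingerprint sketch: draw a fixed scene, hash the pixel data.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 220;
  canvas.height = 40;
  const ctx = canvas.getContext("2d")!;
  ctx.textBaseline = "top";
  ctx.font = "14px 'Arial'";
  ctx.fillStyle = "#f60";
  ctx.fillRect(100, 2, 60, 20);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint test, \u{1F600}", 2, 15); // emoji exercises font fallback
  const data = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest("SHA-256", data);
  return [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("");
}

canvasFingerprint().then(console.log); // same browser + hardware ⇒ same hash, usually
```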

Yes, Panopticlick includes canvas fingerprinting; see the example detailed test result that was posted earlier.

3 Likes

Okay, sorry, you are right. They ran with the same old tests for a long time, but have since updated. Triangulating with the Wayback Machine, the tests were roughly the same from 2010 to November 2015, and then they revamped them in December. I wrote to them in June 2014 suggesting they add HTML5 feature and canvas checksum checks, citing browserleaks. They never emailed back a response of any kind.

Any development regarding troll identification?

We have this suspected super-annoying individual, who has now made 5 comebacks with new user accounts and email addresses. Our moderators spot him based on his writing style - he always comes back during our live game chats, especially when the team is losing, and starts posting negative one-liners. Cleverly, he does not immediately break any of our rules, and writes decent Finnish, but manages to troll some of our good members with his negativity.

But we have to ban him based on guesswork - he is using a market-leading ISP which keeps its customers behind NAT, so the IP tells us nothing, nor can we block a range.

1 Like

It would be funny to see somebody implement a ghost ban with an AI bot that then vaguely targets said banned user’s posts with intermittent likes and replies.

Can then hold a private troll contest to see who keeps posting the longest without realizing it.

1 Like

Trolling hasn’t been that big of an issue recently; we introduced more checks (e.g. users must have their posts approved while at TL0), which helped us pick out about 80% of the trolls before they got through to posting publicly.

7 Likes

Bumping an old topic here.

We have constant issues with blocked, misbehaving users coming back with a new account. We are virtually defenceless against them, as they are not technically identifiable. Most of our users are not tech-savvy people, so even my proposal of cookie tagging would be an asset.

  • More than 50% of users surf on mobile (a global, increasing trend). IPs vary wildly and NAT is used at the operator level.
  • 4G/LTE-based broadband modems are rapidly increasing too, as operators prefer selling them over cable (cheaper infrastructure maintenance costs). Again, IPs change rapidly.
  • Email service providers allow the creation of aliases, so that won’t stop them (see the sketch after this list).
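On the alias point: even a naive normalization pass catches the common tricks, though provider behaviour varies and this helper is just a sketch (the dot rule is Gmail-specific):

```typescript
// Sketch of collapsing common email alias tricks before comparing addresses.
function normalizeEmail(email: string): string {
  let [local, domain] = email.toLowerCase().split("@");
  local = local.split("+")[0];              // strip plus-addressing: a+spam@x -> a@x
  if (domain === "googlemail.com") domain = "gmail.com"; // equivalent domains
  if (domain === "gmail.com") local = local.replace(/\./g, ""); // Gmail ignores dots
  return `${local}@${domain}`;
}

console.log(normalizeEmail("Troll.Guy+new7@googlemail.com")); // "trollguy@gmail.com"
```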

To summarize: people love our community, but misbehaviour is not uncommon, and we have to remove noisy individuals to keep the overall experience clean. Just a couple of active yet poorly behaved users are able to crap in every hot thread of the moment, ruining the flow of conversation very quickly.

I would love to see this become a key development feature sooner rather than later. The impact on the overall community experience is huge - much more significant than certain technical or UX improvements (which are cool too, of course). Content is king.

4 Likes

What if we implemented a system which slows down verification email sending during suspected troll attacks?

Having to wait a varying amount of time to receive verification emails could discourage a potential troll.

Let’s take an example: a community averages about one new user signup per hour. Then, during one strange hour, the number of users signing up increases to 10+. Our system gets alerted and slows down the verification email sending.

All users who registered during that period would then have to go through manual post approval, which keeps the response automated and can help reduce a bit of potential spam. Addressing spam and trolls must be our priority - at least, that’s what I believe as a Discourse lover. A sketch of the idea is below.
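A minimal sketch of what I mean, with invented thresholds — this is illustrative, not an existing Discourse mechanism:

```typescript
// Sketch: track signups in a sliding one-hour window and delay verification
// emails when the rate spikes. All thresholds are made up for illustration.
const signupTimes: number[] = [];
const HOUR = 60 * 60 * 1000;
const NORMAL_RATE = 1;   // ~1 signup/hour on an average day (example figure)
const SPIKE_FACTOR = 10; // 10x the normal rate triggers throttling

/** Returns how many minutes to delay this signup's verification email. */
function verificationDelay(now: number = Date.now()): number {
  signupTimes.push(now);
  // Drop signups older than one hour from the window.
  while (signupTimes.length && signupTimes[0] < now - HOUR) signupTimes.shift();
  const rate = signupTimes.length; // signups in the last hour, including this one
  if (rate < NORMAL_RATE * SPIKE_FACTOR) return 0; // normal traffic: no delay
  return Math.min(60, (rate - NORMAL_RATE * SPIKE_FACTOR + 1) * 5); // grow with spike
}
```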

@ljpp

AFAIK, the Screened Emails list and the levenshtein site setting should work fairly well to stop new registrations whose email addresses are similar to ones on the screened list.
But if the problem is bulk registration of “seed” accounts, then perhaps some of the same code could be used for some type of registration approval queue?
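For anyone unfamiliar with the mechanism: Levenshtein distance counts the single-character edits (insertions, deletions, substitutions) between two strings. A sketch of the kind of comparison the setting implies — not Discourse’s actual implementation:

```typescript
// Standard dynamic-programming Levenshtein distance between two strings.
function levenshtein(a: string, b: string): number {
  // dp[i][j] = edits to turn the first i chars of a into the first j chars of b.
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
  return dp[a.length][b.length];
}

const screened = ["troll@example.com"];
const candidate = "tro11@example.com";
const suspicious = screened.some((s) => levenshtein(s, candidate) <= 2);
console.log(suspicious); // true: two substitutions away from a screened address
```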

I don’t think this would work for a larger community, especially ours as a game community. During peak periods, like new releases, this solution would discourage serious new users from taking part in the community (i.e. if they want to post critical feedback or get support, they’d want to do it ASAP, and the delay would put them off). In a way, this potential disruption would just be giving trolls the attention they seek.

From experience, we’ve usually found that troll accounts are created at staggered intervals. We haven’t experienced a “surge” of new accounts attacking our community before, just a few new accounts created over the course of a few weeks and some dormant accounts that spring into life.

9 Likes