I do the same here. After writing a reply, I read what I’ve written, change it, re-read it, and more often than not will abandon the reply until a few hours later after I’ve given it more thought. My final reply is never the same as the one I originally would have posted. That in itself helps to “keep the peace.”
Well then, thank you, Sir. I all the more appreciate your reply and recognize you took the time to write, reread, and ultimately hit the reply button.
That is really a fantastic outcome, and I’d guess it would be the norm if people showed more compassion and concern while casting off what must be self-imposed prejudice and misconceptions of others.
I half expect to log in to see another flag and find these posts missing with a note reading
Discussion regarding sincerity, caring, and compassion has been removed. This is not the place for such discussion.
Yes, very similar—our guy is struggling with bipolar disorder along with T1, and being unable to afford or get access to meds is at the center of the whole mess. It’s all kind of ironic and frustrating because obviously insulin (un)affordability and the deadly consequences thereof is a major topic for us, but he’s identified us with the bad guys in his persecution thoughts. Isolating yourself away from the people who are trying to help you is one of the things that goes along with his mental disorder, so it’s a potentially deadly combination. We’re hoping the member he has a level of trust with can get through to him.
Sorry, I never meant to derail the topic.
This thread has become surprisingly positive. It makes me feel happy.
Back on topic though, sorry.
Keep us updated whenever you get the chance!
Okay folks, just something I recently became aware of that I thought I would share: a site listing disposable email accounts.
Input names from this site’s listing into Email Blacklist to slow a troll’s roll.
Please share if you find similar services.
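To make the idea concrete, here is a minimal sketch of checking a new registration’s email domain against such a listing. The domains shown are just well-known examples for illustration; in practice you would load the full published list from the disposable-email site.

```python
# Illustrative sketch: block sign-ups whose email domain appears on a
# disposable-email blocklist. The set below is a tiny placeholder for
# the full listing you would import from a disposable-email site.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

def is_disposable(email: str) -> bool:
    """Return True if the email's domain is on the blocklist."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in DISPOSABLE_DOMAINS
```

A forum platform’s registration hook could call this check and reject (or flag for review) any address that matches.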
Not exactly the same, as this is more about spamming than trolling. But we’ve seen a recent trend toward using real human beings to infiltrate sites and build trust levels before posting links to promote their products. Had a recent one who looked that way on the surface so we checked Stop Forum Spam. No hits on the email address, but the IP address came up freckled with alerts. So we deleted “her.” Then she came back in a few days and pm’ed Admin wondering why her account had been nixed.
Thing is, the initial suspicious “tell” was the profile pic she used. It just looked SO much like it came out of a stock photo repository, as these new spammers all seem to do. So I did an image search on it. It came up as the very first item in this gallery of AI-generated profile pics, though there’s another one in that position now:
No way to blacklist someone based on a profile pic, but it’s good to know about this as more spam operations try using real people to get around the detection algorithms. Advice: if the profile pic looks like it came from a J Crew catalog, check to see if it’s a stock image.
True, this just helps with the whack-a-mole troll who uses a variety of methods like temp email. In our instance, at one point he was creating 10 to 20 accounts a week through VPNs. We have gotten quite adept at spotting his sock accounts.
There are many types of trolls and spammers; the more methods known, the better we can combat them. However, a determined enough user really can’t be kept out of a publicly accessible site.
One of the most effective ways to manage this situation is to simply moderate the posts of all new users who do not meet some criteria (number of approved posts + some other metric, for example).
Then it does not matter how many bogus accounts or proxies the troll creates; all their posts are moderated. This is basic forum management 101. Of course trolls can be managed.
We have done this before by geoip country, region, user-agent string, etc., because when you start to fuse data together (various easily measurable data on the user from standard web client information, like the $_SERVER superglobals, for example), you can score it and let the system auto-moderate (keep the posts hidden) new users whose scores exceed the threshold you set based on your own “anti-troll model.”
The downside to this approach is that some people will be moderated who are legit and they will not have the “good feeling” of quickly posting and feeling welcome. This is basic detection theory 101.
However, these challenges can be overcome, to a high degree of confidence, by scoring the user based on a variety of metrics (geoip info, user agent string, IP address, initial pages visited, and other metrics).
Troll problems are easily solved if you follow an approach like this. You cannot have it both ways. If you want to be “very open,” then you will have trolls and trash. If you are too restrictive, you will have trouble building your community.
Everything is a trade-off and one size does not fit all. What works for Forum A may not be right for Forum B and so forth.
However, trolls can easily be controlled with a small amount of “basic instrumentation” in place: scoring new users based on your “anti-troll algo” and then auto-moderating (staging) those users until they are approved based on their behavior.
It’s basic detection theory…
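The scoring-and-threshold approach described above can be sketched in a few lines. Everything here is an illustrative assumption: the signal names, the weights, and the threshold are placeholders standing in for whatever “anti-troll model” a given forum tunes for itself.

```python
# Toy "anti-troll" scoring sketch: fuse a few request signals into a
# single score and hold new users above a threshold for moderation.
# All weights and signal choices are illustrative, not a real model.
SUSPECT_COUNTRIES = {"XX"}  # placeholder geoip country codes you distrust
SUSPECT_AGENT_WORDS = ("curl", "python-requests", "headless")

def troll_score(geoip_country: str, user_agent: str,
                known_proxy_ip: bool, first_page: str) -> int:
    score = 0
    if geoip_country in SUSPECT_COUNTRIES:
        score += 2
    if any(w in user_agent.lower() for w in SUSPECT_AGENT_WORDS):
        score += 3
    if known_proxy_ip:          # e.g. IP flagged by a service like Stop Forum Spam
        score += 3
    if first_page == "/signup":  # landed straight on signup, never browsed
        score += 1
    return score

THRESHOLD = 4

def should_moderate(**signals) -> bool:
    """New users scoring above THRESHOLD get their posts held for review."""
    return troll_score(**signals) > THRESHOLD
```

The trade-off the poster mentions shows up directly in `THRESHOLD`: set it low and legitimate newcomers get staged; set it high and more sock accounts slip through.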
There are thousands of such sites. I’m sure that there are curated lists of them available. This isn’t the best place to track them.
Well, on the plus side, we’re lucky, as so far we have just the one such troll. For a time we locked things down, as he was very aggressive after being discovered to have 14 sleeper accounts in place.
Our core members in the community have become an excellent resource in keeping his efforts wasted. We figure eventually he will tire or find a new shiny thing to grab his attention.
We have installed the Fingerprint plugin that also helps to a small degree as well.
That plugin looks promising, for sure. Will take a look at it more closely later this year.
Discourse Fingerprint comes as a tool to community managers in their combat with internet trolls. It works by computing a unique identifier (a fingerprint) of each registered user, by taking into consideration over 20 browser characteristics such as user agent, screen resolution, timezone, device memory, etc.
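The core idea behind such a fingerprint can be shown in a few lines: hash a tuple of client characteristics so the same browser maps to the same identifier even across throwaway accounts. The real plugin collects 20+ signals; the handful of fields here are illustrative stand-ins, not its actual implementation.

```python
# Minimal sketch of deriving a browser-fingerprint identifier by
# hashing client characteristics. Field choices are illustrative only.
import hashlib

def fingerprint(user_agent: str, screen: str, timezone: str,
                device_memory_gb: int) -> str:
    """Return a short stable ID for a given combination of browser traits."""
    raw = "|".join([user_agent, screen, timezone, str(device_memory_gb)])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]
```

Two accounts registered from the same browser would hash to the same ID, which is what lets moderators connect sockpuppets; the obvious limitation is that changing any one trait (or browser) changes the hash.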
Sounds very promising indeed, AND officially supported by the good folks at Discourse. Yay!
I think that’s right, which is why methods aimed at containment through some kind of engagement are the right way to go strategically. Basically you want to identify what the emotional payoff is, and find ways to attack that, because that’s what keeps them coming back. One is obviously the perpetual tit-for-tat in the comment thread: The Search for the Perfect Killer Zinger that traps everyone in the flame war spiral. So you ban them, but then the payoff becomes the even more gratifying game of subverting the ban and coming back in through another crack in the edifice.
In our troll’s case the big emotional payoff was that he was all about casting himself as a Revolutionary Outlaw Liberator and Exposer of Injustice and Corruption. Deleting his posts and closing his accounts just confirmed him in that role. It was totally gratifying for him, so he kept sneaking back in, which also made him feel like the guy in the Guy Fawkes mask. But the vulnerability he’d left open was that we actually had any amount of discussion on his pet topic, contrary to his suppression claims. Lots of members had pointed this out to him, but to no visible effect, because he could keep the thread spinning along, collecting flags (further validation that he was being oppressed), and happily pushing it to the limit until he was banned and his content deleted. Victory! For him. And he’d activate a new sockpuppet, rinse and repeat.
My defense was to reply to one of his posts with a long LONG list of links to discussions dealing with the exact issue he claimed we were suppressing, then lock the discussion and freeze his sockpuppet account without deleting it. That was key: it meant that his claim and our response remained visible but he couldn’t do anything about it. “You show up, we’ll show you’re full of sh**, and leave it open for all to see.” Creating a new sockpuppet was no answer because there was no way he could resist playing the same game and we could ensure it would end the same unsatisfying way.
His vulnerability was that he was making factual claims that we had the ammunition to disprove. He compounded it with one of his in-thread challenges to us: “You’ll see, my post will be deleted within a day!” Well, that was easy to disprove too. We didn’t delete it, we left it there for anyone to read, along with the counter evidence and no further discussion allowed. Not allowing him to respond left him no way to recover his self-image, and sent the message that further attempts would end in the same emotionally unsatisfying dead end.
A lot of the time, of course, it’s really hard to deprive them of whatever their kick is. Few admonitions are less effective than a blanket “Don’t feed the trolls.” But if you can figure out their inner motivation at least in some cases it may be possible to act in ways that deprive them of what trolling does for them. At least in this case there was an angle that seems to have been fairly effective. His efforts did sputter after that and we haven’t seen him in weeks now.
Indeed, we did the same. He has one frozen account, and we just delete his clones. For the most part he has lightened up in his efforts. Now he is mainly targeting just one member of our forum, from Reddit. On that, we monitor but ignore.