Regarding people who answer by copy-pasting from ChatGPT, here’s a curious example of some backlash:
I’m also seeing this in the forums I frequent most (which are quite technical in nature). A specific new user has been posting 8 or 9 answers a day, all from ChatGPT.
There’s a lot of valuable information there, but there are also inaccuracies and material that is downright misleading (imagine a complicated seven-step list where I look at step 6 and know it will never work… but many people can’t tell).
This user seems to be trying to build up a reputation… we’re considering asking people to always explicitly label ChatGPT content as what it is. A request we can only hope they will respect, since we can’t really enforce it.
This makes me uneasy; I don’t think this is going to take us to a good place. I fear that meaningless conversation will simply take over the Internet. We might be forced to live real lives in the real world, and I’m not sure we can handle that…
My older son delights in sending me articles and posts showing ChatGPT inanities. I keep telling him that ChatGPT technology is perhaps at a 2nd- or 3rd-grade level right now in many areas, like math. To quote one of my professors years ago, it doesn’t know what it doesn’t know.
I haven’t asked it for a recipe to bake a cake yet, I shudder to think what it might get wrong. (I’ve seen too many human-posted recipes online that are totally unworkable.)
I think meaningless conversation may have taken over the Internet when Facebook and Twitter were invented. Those of us old enough to remember USENET when it was still accurate to call it a video game for people who know how to read might draw the line even further in the past.
We’re definitely feeling this at the moment - running a free & open Discourse forum, and seeing a lot of ChatGPT-style spambots slipping past the existing spam protection.
Akismet isn’t keeping pace with them at the moment, as I’ve never seen so many spam accounts get through before. They tend to post innocuous-looking comments that answer a question (or rephrase it, and then answer) with a fairly wordy answer that appears helpful, but is missing some context or nuance from the original question.
I assume that Discourse forums are targeted as a testing ground, for similar reasons to WordPress sites a decade ago: it’s open-source tech that’s in fairly widespread use, so bots that can navigate the signup flow in one forum can probably get access to many others.
We don’t currently see the traditional spambot behaviour (create an account, post something innocuous enough to slip through anti-spam measures, and then edit it within 48hrs to add links) - so I’m not sure what the endgame is here. It’s possible that we’re just deleting those posts too quickly, and they’d be edited into backlink farms later on.
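If the endgame does turn out to be the classic pattern (create an account, post something innocuous, then quietly edit links in later), that pattern is mechanical enough to flag automatically. A minimal sketch of such a heuristic; the function name, the 48-hour window, and the idea of comparing pre- and post-edit bodies are illustrative assumptions, not an existing Discourse feature:

```python
from datetime import datetime, timedelta
import re

LINK_RE = re.compile(r"https?://", re.IGNORECASE)
EDIT_WINDOW = timedelta(hours=48)  # matches the "edit within 48hrs" pattern described above

def is_sleeper_edit(created_at, edited_at, old_body, new_body):
    """Flag edits that add links shortly after account creation:
    the create -> innocuous post -> backlink-edit pattern."""
    if edited_at - created_at > EDIT_WINDOW:
        return False  # old accounts editing links in are a different problem
    old_links = len(LINK_RE.findall(old_body))
    new_links = len(LINK_RE.findall(new_body))
    return new_links > old_links
```

Flagged edits would still go to a human moderator; the point is only to surface the suspicious ones rather than auto-delete.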
I feel like mandatory 2FA should filter out most of the bots. Any thoughts?
Otherwise, I would probably have users fill out a typeform after account creation, add them to a “verified” group through a webhook on form submission, and make the forum read-only for everyone who isn’t verified.
That being said, now that bots can browse the web and understand button intents, I feel like that’s a band-aid approach.
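For what it’s worth, the glue code for the typeform-plus-webhook idea is small. A sketch of the webhook side, assuming the Discourse admin API’s add-to-group endpoint (`PUT /groups/{id}/members.json` with an `Api-Key` header) and a hypothetical Typeform payload shape; the function names and the `"system"` acting user are illustrative:

```python
def build_add_to_group_request(base_url, api_key, group_id, username):
    """Describe the Discourse API call that adds a user to a group.
    base_url, api_key, and group_id are deployment-specific."""
    return {
        "method": "PUT",
        "url": f"{base_url}/groups/{group_id}/members.json",
        "headers": {
            "Api-Key": api_key,
            "Api-Username": "system",  # an admin account acting on the bot's behalf
        },
        "json": {"usernames": username},
    }

def handle_typeform_webhook(payload, base_url, api_key, group_id):
    """Pull the forum username out of a Typeform webhook payload and
    return the request needed to mark the user verified. The payload
    shape here is an assumption; check your form's actual answer keys."""
    answers = payload["form_response"]["answers"]
    username = next(a["text"] for a in answers if a["type"] == "text")
    return build_add_to_group_request(base_url, api_key, group_id, username)
```

The returned dict would be passed to an HTTP client (e.g. `requests.request(**req)`); keeping the request-building separate makes the webhook handler easy to test without touching the live forum.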
Everyone can rest easy: the Commerce Department has announced plans to make sure AI has appropriate ‘guardrails’.
From the WSJ:
“It is amazing to see what these tools can do even in their relative infancy,” said Alan Davidson, who leads the National Telecommunications and Information Administration, the Commerce Department agency that put out the request for comment. “We know that we need to put some guardrails in place to make sure that they are being used responsibly.”
Yes… if the text were to actually still be written by human participants.
Which it probably won’t, more and more. Hence the problem.
AI may cause all you described to become worse and worse, and ever more difficult to detect.
Authors of books are one thing. Not knowing anymore who you are actually dealing with in online discussions is another. All the clues you are supposed to get from a written text become distorted if it isn’t written genuinely by an individual, but with AI.
This is a totally niche-dependent question. My circles are strongly connected to nutrition and the feeding of dogs. I can tell right away if a text is generated by AI, because AIs seem to think a million flies can’t be wrong.
The same thing is visible when we are talking about hiking, paddling, etc.
So it depends on what kind of sect has written most of the text on the web. That’s why everyone other than the coders in my friends and community counts AI as just another hype cycle, at the moment anyway.
I agree in principle, but the criteria for identifying machine-generated text are a moving target. The good news is that the Chrome extension hasn’t generated any false positives yet, albeit in very limited testing. The bad news is the extra work for moderators to make the determination, and the need to keep up with the most current detection tools as AI-generated text gets better and better. AI text will eventually reach a point where detection is problematic.
Indeed, but I’m thinking of it as a moderation problem. Our human mods already keep an eye out for posts which are confrontational, rude, insulting, repetitive, or misleading. This is an extension of that: with a local cultural norm which says real people don’t post machine text without a warning notice, anything which looks (or smells) like machine text, especially from a relatively new user, is automatically suspicious.
I think human brains, especially moderator brains, are going to get reasonably good at spotting machine text. When the machine text is indistinguishable from a useful post which moves the conversation forward, we no longer have a problem. (People who make low-effort and low-value posts will perhaps be penalised.)