How are we all feeling about ChatGPT and other LLMs and how they'll impact forums?

I get the POV, but this gives me mixed feelings. Here, the idea would be that AI input is undesirable because it wouldn’t be “good enough”. I’m not even sure that’s true. AI may already produce better text than quite a few humans.

Isn’t the real idea that forums are “for humans”, and machines are undesirable altogether? Yes, it might be seen as some form of discrimination, but we’re talking about machines, not people.


Isn’t the real idea to have a meaningful discussion? If some parts of that discussion are generated by AI, well-written and well-researched, does that make the discussion any less meaningful?

Consider this very forum, for example. If there were an AI helping to solve problems with answers as good as, if not better than, the ones we’re writing ourselves, would that be a bad thing?

The AI engines might not have created the liquorice badge, or asked for it. :slight_smile:


Me too, on reflection. And I think it will very much depend on the purpose of the forum: a mutual support group is one case, a product support site another.

But, as moderators, we already have to deal with individuals who are a bit wayward, who don’t quite stay on the topic of the thread, or even on-topic for the forum.


Yes, probably. But that might circle back to the question “what is a discussion?”. Can you have a “meaningful discussion” with an AI? It seems some people already spend time “discussing” with ChatGPT.

What if some other AIs answer the first ones? And what if they’re all having a “meaningful discussion” together? Would something where even just the majority of participants are AIs still be a “discussion”?

IDK, it seems complex, and the answers may depend on the POV. There is also a difference between having an internal AI injecting some information and starting to have outside AIs as participants. To me, the real point of @Ed_S’s post and the ones above was the “indistinguishable” part.

“If you can’t tell, does it matter?” was one of the first lines of dialogue of the Westworld television series, presented as a throwaway in the first episode of the first season, in response to the question “Are you real?”

Source: philosophy - What are the implications of the statement "If you can't tell, does it matter?" in relation to AI? - Artificial Intelligence Stack Exchange


A separate but sort of related question is whether advertisers will raise a fuss over AI generated traffic.

It isn’t like a bot is gonna buy something, though it might recommend someone buy it (or not.)


TicketMaster / GPU sellers might disagree with you here :wink:


Good points, but I’ve been thinking about this for probably 12 years. I think the calls for regulation from some of the industry and creators are duplicitous at best while they flood tech with these clearly irresistible goodies. I don’t buy the sell, so let me jump ahead instead of oohh’ing and ahhh’ing at the new sparkly “toys”, because this most certainly is not Christmas and all that shines is not gold.

Anyone with eyes can see that in a cyberworld where AI can generate text and imagery like humans do, regulators will lunge for a universal digital ID for the entire internet, and suddenly those toys are painted with lead.


Around 30 years ago, when email spam and spoofing/phishing were just starting to become annoyances, there was a similar call for a universal ID. It failed then, and I suspect it will fail again, because the technology to make it work reliably still doesn’t exist and may never exist.


Would you have a source for this?

I don’t remember anything like it at all. My perception is the world “only” started to get crazy post-9/11. Before that, I don’t recall there was a push for more surveillance and identification of everything and everyone using any possible excuse. What you say smells like rationalization, but I’m looking forward to being proven wrong. Even if things were bad before, that isn’t an excuse to make them bad today or tomorrow.

I don’t recall a lot of the details at this point. It was a committee initially put together by people on the mailing list managers email list, with a couple of members who were on some of the ANSI committees and were willing to sponsor it for consideration in the ANSI committee structure. I remember there were two or three of us from the USA, one from Switzerland, one from Germany, a few more, around a dozen in all.

We came up with a proposal for a ‘universal ID and credit’ system that would have charged ISPs something like 0.00001 cent for each email sent. It didn’t get through the ANSI committee structure, the security folks said the ID system wasn’t secure, others said administering the central credit structure would be too expensive and a political football, others said it could be used by governments for tracking private emails, etc.

FWIW, a friend of mine who was a cryptography expert agreed with the consensus that the ID system was insecure, but he was of the opinion that no cryptographic system is secure, given enough computing power.


I mean, you had thousands of what I would describe as weirdos (if this applies to you, calm down… I collect Ninja Turtles) losing their minds all over Twitter and Reddit when that “relationship AI” lost some of its “personality” and stopped mimicking emotions, because their pseudo-contact had been torpedoed from their lives. And this was after what, two months or so? Make of that what you will.

I don’t think most people use forums as a personal friendship simulator, though I may be wrong, and all of these people did run to message-board replacement tools to complain about it…


Exactly, most of these tools are designed for basic ChatGPT output. Any good prompt engineer can bypass them very easily.

For example, a Raymond Reddington-style response (I got rate-limited on the website you provided, but it passed there as well).


Yeah, I agree with you too. The thing is, ChatGPT is dominating the AI market, but there are many free tools out there as well.


But mostly what forum developers and admins need are APIs, and those are rarely, if ever, free: even if you run your own local LLM, you have to pay significantly higher infrastructure costs.


The Wall Street Journal is reporting that the FTC is launching an investigation into whether ChatGPT is harming people by publishing false information about them.

Not sure if this link will work; it might be behind their paywall.


ChatGPT has taken over the world. It is everywhere. We cannot escape it.

It is used in everything now.

For example:
Discourse AI

AI-generated cartoons also exist (like ai_peter), where you put in topics and an AI writes a script

and much more!

AI may hallucinate answers, which makes it risky to rely on: always fact-check. Be really careful, because wrong answers may cause problems.


Most forum admins have seen plenty of erroneous posts from human posters; will AI-generated ones be more difficult to deal with?


How will the recently announced agreement among the major AI developers to require things like watermarking of AI-generated content affect embedded LLMs? Can Discourse watermark ChatGPT-generated content?


This surely must refer to pictures, video, and potentially sound, but not text? How would you watermark text without compromising the content?


That’s the challenge: there may not be a practical way to watermark AI-generated text. Which means we’ll all still be wondering, “Is this real, or is it AI?”
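For what it’s worth, the schemes that have been proposed for text are statistical rather than visible. The rough idea: at each step, hash the previous token to split the vocabulary into a “green” and a “red” half, and bias generation toward green tokens; a detector recomputes the same split and checks whether far more than half the tokens are green. Here’s a minimal toy sketch of that idea in Python (all names like `green_set` and the `w0…w99` vocabulary are made up for illustration, not anyone’s actual watermarking API):

```python
import hashlib
import random

def green_set(prev_token, vocab, fraction=0.5):
    # Seed an RNG with a hash of the previous token, so a detector
    # can recompute exactly the same vocabulary partition later.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def greenness(tokens, vocab):
    # Fraction of tokens that fall in the green set of their predecessor.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_set(prev, vocab))
    return hits / max(len(pairs), 1)

vocab = [f"w{i}" for i in range(100)]
rng = random.Random(0)

# "Watermarked" generation: always pick the next token from the green set.
wm = ["w0"]
for _ in range(200):
    wm.append(rng.choice(sorted(green_set(wm[-1], vocab))))

# Unwatermarked text: uniform choice over the whole vocabulary.
plain = ["w0"] + [rng.choice(vocab) for _ in range(200)]

print(greenness(wm, vocab))     # 1.0 by construction
print(greenness(plain, vocab))  # hovers around 0.5
```

The catch, and why the skepticism above is fair: a real LLM can’t *always* pick a green token without degrading quality, it can only nudge probabilities, and paraphrasing or light editing washes the signal out. So the watermark is a statistical hint over long passages, not a reliable per-post verdict.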