I get the POV, but this gives me mixed feelings. Here, the idea would be that AI input is undesirable because it wouldn’t be “good enough”. I’m not even sure that’s true. AI may already produce better text than quite a few humans.
Isn’t the real idea that forums are “for humans”, and machines are undesirable altogether? Yes, it might be seen as some form of discrimination, but we’re talking about machines, not people.
Isn’t the real idea to have a meaningful discussion? If some parts of that discussion are generated by AI, well-written and well-researched, does that make the discussion any less meaningful?
Consider this very forum, for example. If there were an AI helping to solve problems, with answers as good as, if not better than, the ones we’re writing ourselves, is that a bad thing?
The AI engines might not have created the liquorice badge, or asked for it.
Me too, on reflection. And I think it will very much depend on the purpose of the forum: a mutual support group is one case, a product support site another.
But, as moderators, we already have to deal with individuals who are a bit wayward, who don’t quite stay on the topic of the thread, or even on-topic for the forum.
Yes, probably. But that might circle back to the question “what is a discussion?”. Can you have a “meaningful discussion” with an AI? It seems some people have already spent time “discussing” things with ChatGPT.
What if some other AIs answer the first ones? And what if they’re all having a “meaningful discussion” together? Would something where even just the majority of participants are AIs still be a “discussion”?
IDK; these are complex questions, and the answers may depend on the POV. There is also a difference between having an internal AI injecting some information and starting to have outside AIs as participants. To me, the real point of @Ed_S’s post and the ones above was the “indistinguishable” part.
Good points, but I’ve been thinking about this for probably 12 years. I think the calls for regulation from some of the industry and creators are duplicitous at best while they flood tech with these clearly irresistible goodies. I don’t buy the sales pitch, so let me jump ahead instead of oohing and aahing at the new sparkly “toys”, because this most certainly is not Christmas, and all that shines is not gold.
Anyone with eyes can see that in a cyberworld where AI can generate text and imagery like humans, regulators will lunge for a universal digital ID for the entire internet, and suddenly those toys are painted with lead.
Around 30 years ago, when email spam and spoofing/phishing were just starting to become annoyances, there was a similar call for a universal ID. It failed then, and I suspect it will fail again, because the technology to make it work reliably still doesn’t exist and may never exist.
I don’t remember anything like it at all. My perception is the world “only” started to get crazy post-9/11. Before that, I don’t recall a push for more surveillance and identification of everything and everyone using any possible excuse. What you say smells like rationalization, but I’m looking forward to being proven wrong. Even if things were bad before, that isn’t an excuse to make them bad today or tomorrow.
I don’t recall a lot of the details at this point. It was a committee initially put together by people on the mailing-list managers’ email list, with a couple of members who were on some of the ANSI committees and were willing to sponsor it for consideration in the ANSI committee structure. I remember there were two or three of us from the USA, one from Switzerland, one from Germany, and a few more, around a dozen in all.
We came up with a proposal for a “universal ID and credit” system that would have charged ISPs something like 0.00001 cent for each email sent. It didn’t get through the ANSI committee structure: the security folks said the ID system wasn’t secure, others said administering the central credit structure would be too expensive and a political football, still others said it could be used by governments for tracking private emails, etc.
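Taking that 0.00001 cent figure at face value, the economics are easy to sketch. Here's a quick back-of-envelope calculation (the volumes are illustrative, not from the committee's proposal):

```python
# Back-of-envelope cost of the proposed per-email charge.
# 0.00001 cent = $0.0000001 per message; volumes are made up for illustration.
RATE_USD = 0.00001 / 100  # convert cents to dollars

for volume in (1_000, 1_000_000, 1_000_000_000):
    print(f"{volume:>13,} emails -> ${volume * RATE_USD:,.4f}")
```

Even a billion messages would have cost only about $100, which suggests the charge was meant more as an accounting and identity hook than as a serious financial deterrent on its own.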
FWIW, a friend of mine who was a cryptography expert agreed with the consensus that the ID system was insecure, but he was of the opinion that no cryptographic system is secure, given enough computing power.
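To give a sense of the scale behind that opinion (the attacker speed here is my own generous assumption, not his): brute-forcing a modern 128-bit keyspace is out of reach by an absurd margin.

```python
# Rough brute-force timing for a 128-bit key.
# The guess rate is a deliberately generous, hypothetical attacker.
keyspace = 2 ** 128
guesses_per_second = 1e18
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / guesses_per_second / seconds_per_year
print(f"{years:.2e} years")  # on the order of 1e13 years to exhaust
```

So “given enough computing power” is true in principle but, for brute force at least, not a practical worry; the weaknesses that matter tend to be in implementations and key handling.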
I mean, when that “relationship AI” lost some of its “personality” and stopped mimicking emotions as such, you had thousands of what I would describe as weirdos (if this applies to you, calm down… I collect Ninja Turtles) losing their minds all over Twitter and Reddit because their pseudo-contact had been torpedoed from their lives. And this was after what, two months or so? Make of that what you will.
Although I don’t think most people use forums as a personal friendship simulator (I may be wrong), all of these people did run to message-board replacement tools to complain about it…
But mostly, what forum developers and admins need is APIs, and those are rarely, if ever, free: even if you run your own local LLM, you have to pay significantly higher infrastructure costs.
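As a rough sketch of what the API side of that can look like (every price and volume below is a placeholder I invented, not any vendor’s actual rate):

```python
# Sketch of estimating monthly LLM API spend for a forum auto-reply bot.
# All prices and token counts are hypothetical placeholders.
PRICE_PER_1K_INPUT = 0.001   # $ per 1,000 prompt tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.002  # $ per 1,000 completion tokens (assumed)

def monthly_cost(replies_per_day: int, in_tokens: int = 800,
                 out_tokens: int = 400, days: int = 30) -> float:
    """Estimate a month of API charges for an auto-reply bot."""
    per_reply = ((in_tokens / 1000) * PRICE_PER_1K_INPUT
                 + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT)
    return replies_per_day * days * per_reply

print(f"${monthly_cost(200):.2f}/month")  # 200 replies/day -> $9.60/month
```

The bill scales linearly with traffic, which is why busy forums feel it: the same sketch at 20,000 replies a day comes to $960 a month, before moderation passes or retries.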
The Wall Street Journal is reporting that the FTC is launching an investigation into whether ChatGPT is harming people by publishing false information about them.
Not sure if this link will work; it might be behind their paywall.
How will the recently announced agreement among the major AI developers to require things like watermarking of AI-generated content affect embedded LLMs? Can Discourse watermark ChatGPT-generated content?
That’s the challenge: there may not be a practical way to watermark AI-generated text. Which means we’ll all still be wondering, “Is this real, or is it AI?”
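For what it’s worth, the best-known text-watermarking proposal (the “green list” scheme from Kirchenbauer et al., 2023) biases generation toward a pseudo-randomly chosen subset of tokens and then tests for that bias. A toy detector, using whitespace splitting instead of a real tokenizer, might look like the sketch below; it also shows why the approach is fragile, since any paraphrase changes the token pairs and erases the signal:

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Toy 'green list' membership test: hash the (previous, current)
    token pair so roughly half of all continuations count as green."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of token transitions on the green list: ~0.5 for ordinary
    text, measurably higher if a generator was biased toward green tokens."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

print(green_fraction("the quick brown fox jumps over the lazy dog"))
```

Detection only works if the text reaches you unedited and you know the hashing scheme, which is why “wondering if it’s real” is likely to remain the default state.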