It really depends, right? In the example above, I would be upset at the poster if the answer contained clear incorrectness - especially incorrectness that would lead readers into bigger problems. I’m fine with people posting AI responses if they’ve also taken the time to validate and verify the appropriateness and correctness of the information. Even better if they agree the generated content is similar to, or better than, how they would have shared that information anyway.
If, on the other hand, you’ve just thrown something into an LLM, copy-pasted the answer, and you expect me as a reader of your topic to figure out whether that content is correct, then I’m not going to be happy. Not only are you wasting my time by presenting potentially false information as correct, but you’re also reducing the trustworthiness of the community in general - or at least of your own future responses.
Looking forward, I suspect many well-intentioned people will move away from using LLMs in forums over the next few years, leaving behind bad actors and people who just don’t get forums in the first place. It’s purely anecdotal, but I’ve used LLMs daily for about 2 years now, and I’m moving more and more towards self-generated content. Yesterday I wanted to write a blog post that I wasn’t super invested in. I decided to try GPT, but no matter what I did, the output always sounded like GPT and I hated that. So I tried Claude, and despite working with it for an hour, the content never sounded like something written by a real living person. After about 3 hours total, I tore everything up and wrote it myself in about an hour.
The point of that story is to say - at least for me - self-generated content is proving more and more rewarding no matter how good the models get. I really hope other people using LLMs also reach that stage in the next 2-3 years. If that happens, AI responses in forums will only come from bad actors, or from people with good intentions who don’t know any better. Either way, that situation makes the content much easier to handle: you can ban the bad actors, educate those with good intentions, and have a moderation policy of “AI content is banned, unless you’ve clearly digested the content you are posting AND fully agree that it represents your opinion and understanding of the matter AND it expresses your opinion in an equally or more digestible manner than you would produce yourself” or something along those lines.
P.S. I really like this perspective:
I didn’t immediately agree with it, but after a bit of thought I can totally see this being the case.