How are we all feeling about ChatGPT and other LLMs and how they'll impact forums?

Seems very good at it:

there goes several businesses! :sweat_smile:

1 Like

Utterly preposterous imho

Should we reimburse all of humanity for evolving the beautiful languages we have?

But I digress.

1 Like

I don’t disagree with you, but I suspect that many lawsuits are considered utterly preposterous by the defendants but costly nonetheless.

2 Likes

If a human vetted question and answer (for example a Discourse solved topic) has economic value as training data, it doesn’t seem unreasonable to want to get paid for it. There’s a need for the data, so it would be kind of a win-win scenario.

1 Like

There are at least two writing competitions in which the object is to write in the style of some designated author. (Bulwer-Lytton and Hemingway)

But I could see where asking an AI to write a novel in the style of some well-known author might raise some hackles with that author or their heirs. A recognizable style could be considered 'intellectual property', or at least some lawyer might be willing to claim so in court.

2 Likes

Has anyone had a lot of buzz from users excited to use Discourse Chatbot within their forums? I have seen all this chatbot stuff and I use ChatGPT, Perplexity, Claude, Bard, etc. every day. But I thought forums were a safe space from all of that. I wrote an article about that yesterday: I Think AI Saturation Will Revive this Old Web Platform (web forums)

I’m really curious if forum users are desiring Chatbots and AI when they visit discussion forums powered by Discourse and others. If this is the case, I will really have to revamp my idea of forums and even consider a plugin like this. This seems like a big project, maybe time-consuming even. As always, I appreciate all you guys do. Trying to learn about the demand that produced this so that I’m in the loop as it were.

3 Likes

I’m looking into using it in a technical support forum to help answer easy/repetitive questions quickly when staff is busy and during off hours. I think it will be great in that capacity.

5 Likes

Yes, recently I opened a chat window with Hostinger support. It was an AI chatbot. And the chatbot was so effective it told me about a refund option I would never have known about and even sent me a link to the refund policy! lol

It understood what I was asking and didn’t ask me if I already tried 10 basic things. So yes, I can see with support cases, it being useful.

Hopefully, that is then saved to the forums, so others can see or even add to the discussion rather than replace it.

1 Like

Would that also be the case with a knowledgeable support person who had experience using the software they provide support for?

1 Like

Of course not. There is no such thing as a perfect option for everybody.

GPTs can evolve. But right now they are a low-level option, even for doing simple math. GPT-3.5 can't reliably get even the basics right. Hallucination is a really big problem when the facts need to be right, or even close to right.

Languages other than English are hard. For a few massive languages it will work well, but for me, and for everyone who speaks a minor language, especially one whose structure doesn't use prepositions, translations will never be top notch.

GPT will first translate to English, changing the prompt in the process. Then the answer will be translated back from English, and GPT will do another round of changes and hallucination. The end product will be far from what was asked, and even far from what GPT was offering in the beginning.

And because training is based on the idea that a million flies can't be wrong, and quantity wins over quality, the amount of mis- and disinformation is more than just huge. And on top of that there will be even more fiction, because of hallucination.

Of course it is not that black and white. I'm using an entry-level solution. But if there is money to spend, one can do one's own training, and then the playground will change big time.

Still, I'll make a claim: GPT works best when analyzing, or when doing something that doesn't have too much variation. Or when it can create something "new", totally fictive stuff. But in the wide middle ground where a GPT should offer facts and reliable information… not so much.

I'm using GPT-3.5 by OpenAI a lot every day as… search on steroids. And I'm not too happy. I have to check, re-check and rewrite a lot, but I don't deny that GPT is still saving me time when creating bulk text.

4 Likes

There was an interesting study on a version of this question published recently:

https://www.nature.com/articles/s41598-024-61221-0

The consequences of generative AI for online knowledge communities

Generative artificial intelligence technologies, especially large language models (LLMs) like ChatGPT, are revolutionizing information acquisition and content production across a variety of domains. These technologies have a significant potential to impact participation and content production in online knowledge communities. We provide initial evidence of this, analyzing data from Stack Overflow and Reddit developer communities between October 2021 and March 2023, documenting ChatGPT’s influence on user activity in the former. We observe significant declines in both website visits and question volumes at Stack Overflow, particularly around topics where ChatGPT excels. By contrast, activity in Reddit communities shows no evidence of decline, suggesting the importance of social fabric as a buffer against the community-degrading effects of LLMs. Finally, the decline in participation on Stack Overflow is found to be concentrated among newer users, indicating that more junior, less socially embedded users are particularly likely to exit.

6 Likes

That pretty much describes my own behaviour. I still ask and answer questions on Meta - I’ve got a social connection here. But for learning about new programming languages and frameworks I rely on a combination of ChatGPT and online documentation.

Possibly the main thing LLMs have going for them is their availability. I’d prefer to get guidance from human experts, but no one has enough time or patience to answer all of my questions at the drop of a hat.

A big downside of learning through LLMs, as opposed to learning on a public forum, is that the information that is generated is private. It's fairly seldom that learning something via an LLM is just a matter of asking one question and getting back the correct answer. It's more like: ask it a question, try applying the answer, read some documentation to figure out why the answer didn't work, get back to the LLM with a follow-up question… eventually a bit of knowledge is generated.

I don’t think anyone wants to read other people’s chat logs, but possibly technical forums could promote the idea of people posting knowledge that they’ve gleaned from LLMs.

Another obvious downside of learning via LLMs is the loss of social connection, human attention as a motivation for learning, job opportunities, etc. That’s kind of a big deal from my point of view.

10 Likes

Availability is the main reason we’re building a support bot.

4 Likes