How are we all feeling about ChatGPT and other LLMs and how they'll impact forums?

So… how do I feel about all of this stuff?

Obviously, I am excited. We have a brand new box of toys to play with, and these toys are unlike any we have had before. As with most tech, they can be used for good or evil.

Personally, I feel like I’m in the midst of an AI fog. I used to have a pretty good grasp of where things were headed over the next three to four years; however, LLMs and future evolutions of this tech have thrown a spanner in the works. I just don’t know where things will be in one year.

When you take a narrow look at how fast Stable Diffusion has moved in the past few months, it is jaw-dropping. The scope of progress from GPT-3.5 to GPT-4 is equally staggering.

Even with the new AI fog, I do have some very real concerns:

  • It is 100% clear to me that support forums will be ingested by LLMs and used as training data for helpful chatbots. The upside is that much of the drudgery around support can go away, and you get answers right away. The downside is that less new content will be created, which can very quickly cause a death spiral unless it is mitigated.

  • I’m certain that spammers and other bad actors are going to lean on this tech, launching attacks far more sophisticated than anything we have seen to date, potentially killing forums if we are not careful.

  • This assistive tech can erode trust. When you read words that I write, you expect the words to come from me. People can use AI to do more than proofread; it can completely re-voice our words. Want to sound like Malcolm Gladwell? No problem, you can do it. What is the level of labeling that responsible forum users ought to expect? People will abandon forums if they can no longer trust that they are talking to people rather than to facsimiles.

Here is that paragraph, re-voiced in Malcolm’s words:

In today’s world, assistive technology has the power to erode trust in ways we may not have anticipated. As consumers of the written word, we have certain expectations - namely, that the words we read are authentic and emanate from their stated source. However, with the advent of AI technology, individuals can manipulate written text in ways that go beyond mere proofreading. In essence, they can completely re-voice a message to sound like anyone they choose. So, if you’ve ever dreamed of sounding like Malcolm Gladwell or any other notable figure, this technology can make it happen with ease. However, this raises questions about responsible labeling for forum users. If people can no longer trust that they are conversing with other people, and not just computer-generated facsimiles, they may very well abandon forums altogether. The level of labeling required to maintain trust in online discourse is a pressing concern that deserves serious consideration.

  • We have seen SEO factory forums in the past; we will see far more elaborate and scary versions of this.
Other, similar concerns GPT-4 raised:
  • There is a risk that the use of LLMs in forums could create a culture of laziness among users who rely too heavily on the AI technology to provide answers and insights. This could lead to a decline in critical thinking and problem-solving skills among forum users.
  • The use of LLMs in forums raises ethical concerns around data privacy and security. Users may be uncomfortable with the idea of their personal information and interactions being analyzed and processed by an AI system, especially if they are not fully aware of how their data is being used.
  • The integration of LLMs in forums may exacerbate existing biases and inequalities, particularly if the AI technology is trained on a limited dataset that fails to capture the diversity of experiences and perspectives of forum users.
  • The use of LLMs in forums may also result in the homogenization of discourse, as users are more likely to receive standardized responses that are generated by the AI system rather than nuanced and diverse feedback from other human users. This could stifle creativity and originality in forum conversations.
  • There is a risk that the integration of LLMs in forums could lead to the displacement of human moderators and support staff, which could have negative implications for job security and the quality of support provided to users.

Despite these fears and more, I remain both excited and hopeful. There are going to be delightful and extremely useful applications of this tech and we hope to explore that at Discourse.

The fire is already out in the wild; the best we can do here is be very careful with our approaches and experiments and try to contain it. I do hope we can contain it. But you know… AI fog…
