How are we all feeling about ChatGPT and other LLMs and how they'll impact forums?

As I follow the news coming out about ChatGPT, GPT-4, and other LLMs, and have been playing with the Discourse Chatbot 🤖 (supporting ChatGPT), I feel excited, terrified, and unsure of what it will mean for online communication.

I wonder how it will impact forums and text-based communication in general, and I worry about how it will eventually impact voice-based communication too.

Will it get to the point where most users are creating posts with ChatGPT and copy-pasting them into forums? If so, will people then paste forum posts back into ChatGPT to summarize the AI-written post? Will we care whether someone wrote it “organically” or used the help of AI? Will forums and websites in general still offer as much benefit if people can get similar answers from ChatGPT? How will forums differentiate themselves and stay relevant?

I feel quite unsure of what’s to come and would just love to hear what others are feeling and thinking right now :slight_smile:


Having just had ChatGPT confidently return some very wrong answers about my local mountain bike trails, I’m going to go out on a limb and say that I don’t expect LLMs to have a huge impact on online discussions. Its answers will get better over time, but it will never have the human motivation to avoid being wrong on the internet. Also, as a language model, it’s “not capable of physical actions such as going for a mountain bike ride.” So even if it could give me accurate information about the trails, it’s missing the human dimension that would come with having a local mountain biking forum.


So… how do I feel about all of this stuff?

Obviously, I am excited. We have a brand new box of toys to play with, and these toys are unlike any other toys we had in the past. As with most tech, this can be used for good or evil.

Personally, I feel like I’m in the midst of an AI fog. I used to have a pretty good grasp of where stuff is headed in the next three to four years; however, LLMs and future evolutions of this tech have thrown a spanner in the works. I just don’t know where stuff will be in one year.

When you take a narrow look at how fast Stable Diffusion has moved in the past few months, it is jaw-dropping. When you look at the scope of progress from GPT-3.5 to GPT-4, it is also quite jaw-dropping.

Even with the new AI fog, I do have some very real concerns:

  • It is 100% clear to me that support forums will be ingested by LLMs and used as training data for helpful chatbots. The upside here is that much of the drudgery around support can go away, and you get answers right away. However, the downside is that less new content will be created, which can very quickly cause a death spiral unless it is mitigated.

  • I’m certain that spam and other bad actors are going to lean on this tech, launching attacks far more sophisticated than anything we have seen to date, potentially killing forums if we are not careful.

  • This assistive tech can erode trust. When you read words that I write, you expect the words to come from me. People can use AI to do more than proofread; it can completely re-voice our words. Want to sound like Malcolm Gladwell? No problem, you can do it. What level of labeling ought responsible forum users to expect? People will abandon forums if they no longer trust that they are talking to people and suspect they are merely talking to facsimiles.

Read this in Malcolm’s words:

In today’s world, assistive technology has the power to erode trust in ways we may not have anticipated. As consumers of the written word, we have certain expectations - namely, that the words we read are authentic and emanate from their stated source. However, with the advent of AI technology, individuals can manipulate written text in ways that go beyond mere proofreading. In essence, they can completely re-voice a message to sound like anyone they choose. So, if you’ve ever dreamed of sounding like Malcolm Gladwell or any other notable figure, this technology can make it happen with ease. However, this raises questions about responsible labeling for forum users. If people can no longer trust that they are conversing with other people, and not just computer-generated facsimiles, they may very well abandon forums altogether. The level of labeling required to maintain trust in online discourse is a pressing concern that deserves serious consideration.

  • We have seen SEO factory forums in the past; we will see far more elaborate and scary versions of this.
Other similar concerns GPT-4 raises:
  • There is a risk that the use of LLMs in forums could create a culture of laziness among users who rely too heavily on the AI technology to provide answers and insights. This could lead to a decline in critical thinking and problem-solving skills among forum users.
  • The use of LLMs in forums raises ethical concerns around data privacy and security. Users may be uncomfortable with the idea of their personal information and interactions being analyzed and processed by an AI system, especially if they are not fully aware of how their data is being used.
  • The integration of LLMs in forums may exacerbate existing biases and inequalities, particularly if the AI technology is trained on a limited dataset that fails to capture the diversity of experiences and perspectives of forum users.
  • The use of LLMs in forums may also result in the homogenization of discourse, as users are more likely to receive standardized responses that are generated by the AI system rather than nuanced and diverse feedback from other human users. This could stifle creativity and originality in forum conversations.
  • There is a risk that the integration of LLMs in forums could lead to the displacement of human moderators and support staff, which could have negative implications for job security and the quality of support provided to users.

Despite these fears and more, I remain both excited and hopeful. There are going to be delightful and extremely useful applications of this tech, and we hope to explore them at Discourse.

The fire is already out of the box; the best we can do here is be very careful with our approaches and experiments and try to contain it. I do hope we can. But you know… AI fog…


This touches on one of the more profound questions about AI, namely: does it matter, and in what ways does it matter, if I’m communicating with a normal person, an “augmented” person, or an artificial intelligence? Let’s call that the “interlocutor identity problem”.

In some ways humans have always been dealing with the interlocutor identity problem. In philosophy this is sometimes known as the “problem of other minds”. One of the more well-known statements of it is from Descartes’ Evil Demon, the context for his “I think, therefore I am”. The questions of how we will, or how we should, feel about relating to an AI is essentially an extension of the same question.

It arises in an interesting way in the context of forums, because in many ways I think we already accept a high level of opacity in interlocutor identity. All we have to judge our interlocutors by are their words and possibly their self presentation (i.e. avatar, username etc). For better or worse, in many cases we don’t think too much about interlocutor identity, or at least are willing to accept opacity.

This context leads me to some more circumspect conclusions (or questions perhaps) about how this affects forums. For example, let’s consider spam. We don’t like spam for a number of reasons, but let’s focus on two related ones:

  1. It’s ugly noise that makes a forum less attractive.
  2. It has a commercial, malicious or other purpose we don’t want to abet.

Let’s assume that the application of an LLM to “spam” leads to a reduction of the first and a more nuanced approach to the second. If it doesn’t do these things then existing spam elimination methods will suffice to catch it. In some ways it’s a misnomer to call this spam, rather it’s more like “sophisticated automated marketing”, or… “Sam” (sorry I couldn’t resist).

However once we get into the Sam territory, we invariably get closer to the same underlying interlocutor identity problem. It is already the case that some folks use their presence on a forum to market in a nuanced, indirect, way. The question is whether we care about the identity of the actor doing it.

I definitely share your concerns about its effect on the labor market. However, for related reasons, I’m not so sure it will lead to a dumber, lazier, uglier, or more commercial forum ecosystem. If an LLM does this to a forum, that forum will indeed probably not fare so well, but then its erstwhile users will find another forum not affected by an LLM, or affected by a more sophisticated one.

Forums facilitate markets of knowledge, interest, and human connection. If an LLM succeeds in those markets it will undoubtedly affect labor, but I’m not so sure it will slacken demand. Indeed, success entails the opposite.


AI sucks.
I don’t see any benefit.
It will reduce my appetite to participate in anything.


Is this perhaps the most important factor? Every post is tied to a unique identity. When I see a post from you or the (assumed to be) real Sam, I make an estimate of how true that information is likely to be. I’ve heard Sam in a podcast, and spoken with you in the experts video call last year, so I can assume there is at least a real person somewhere behind these posts.

But is the real person even important? If there is an account named “Hal Mcdave” that provides correct information in a consistent and coherent style, then I will also begin to trust and utilise that information. So isn’t that OK?

Surely the real problem has never been artificial intelligence; rather artificial stupidity.


No. Take a short tour among forums and look what the content really is.

AIs will change the CMS world, and they will hit the WordPress-style world hardest, with its already spammy copy-paste content: “best of year XXXX” lists, how-tos, etc.

This circle here is part of the audience that will make heavy use of ChatGPT and the like, and see it used. But the big majority of the rest of the world… not so much. They don’t write any more, don’t know how to copy and paste, and don’t use any of those fancy tools that, for example, Discourse offers.

You guys are pink unicorns :wink: Rare ones. For you, every AI-based tool is a new playground or a toy, because it is part of your profession and hobbies.

For CDCK this is a matter of the near future because of its major corporate clients. They quite often use Discourse for docs, help, and support. For them, such support functions don’t make revenue and need paid human power. They will be in seventh heaven when there is an AI that answers customers and they don’t need a support crew anymore. And the devs are happy too because, as we all know, tech just doesn’t work without never-ending debugging and fixing.

So, AI-based solutions don’t eat trust or anything else, because there isn’t such trust at the moment either. Ordinary users can’t see, or even care, whether answers and comments come from a bunch of code. They are totally happy with new meme creators and TikTok filters :wink: For God’s sake, they are already paying for dating apps where most of the counterparts aren’t human…

Sorry guys, but you are now discussing inside a tight box :grin:


But instead of automatic content creation, which benefits only the owners of a platform, I would like to see an AI-based on-the-fly translator that actually works — because the ones we have now just don’t.

That would help users create any kind of text (and by speech too, of course), so much so that forums, chats, whatever, would reach a truly global audience. That is one tool I’m waiting for.

Or similar things. Sam played with one idea in some topic: using AI to put topics in the right category. Getting really helpful per-user tagging based on reading/searching history would be something. Such things.

So I don’t see AI as a real threat to creation and conversation. I see AI “just” as a tool to minimize the need to do boring stuff :wink:

Again: out there is already low-quality content worth zillions of page loads. If AI changes that, I don’t see it as a threat. And if AI solutions create equally bad shite… well, they will die like the wannabe-automatic support chats (which are just fancy skins for email-based forms).


Very good questions. @merefield has recently released an awesome plugin that uses ChatGPT to create topic summaries.

Google has released their flavor, called Bard.


I feel the same, but I hope that humans will want to continue doing human things like discussing (not only coding, copying, and pasting data).

AI tools could be super useful for operative and repetitive tasks, and I don’t think bots can really engage in human conversations.

That’s fundamentally not possible, and 99.99% of the data in the world won’t change that, because of freedom, choice, and life.

Life is change, and AI can’t change on its own, not fundamentally (Skynet aside, of course).

The majority of people out there will continue searching on Google and clicking only on the first link.

But there will always be the 3% of the world that chooses to dig into the first 5-10 pages of results, or uses DuckDuckGo instead, to surf and really navigate the internet like in the old days.

AI is super bad on social media, because of control, censorship, and massive manipulation, but not so much on forum instances.

Someday a very few people in the world will know everything about all the people, but will that take away our possibility to change?

I don’t think so (and the same applies to quantum technology, which will break everything all at once).


Years ago one of the tech magazines connected two Eliza-like programs: one was designed to talk like a doctor, the other like a person with paranoia. The results were funny, but those were not sophisticated engines like GPT.

If two GPT instances start chatting with each other about sports teams, chess or baking (the areas I currently run discussion forums on), will it be interesting reading? I kinda doubt it.


That feels like an honest answer to the OP’s question. We sure live in interesting times :slight_smile:
My hope is that forums can be used to help us hang on to the sense of what it means to be human.


The term “Artificial Intelligence” can legitimately convey fear and AIs often have been the cause of havoc in books, movies, etc.

I wonder if we’d feel differently if all this technology that goes viral nowadays (midjourney, chatGPT…) didn’t use the term “AI” at all but something else we’re less culturally biased toward.


I’d like to get an AI-veterinary chatbot for automatic initial consultation, as well as information intake to build a detailed support request topic for human veterinary doctors.


I would not be surprised if something like that for veterinarians is already on the market, some of the medical practices are starting to use them to cut staff time.


How I feel, with a little help by midjourney and chatgpt… :upside_down_face:


I guess I can’t really speak on behalf of others. But from my experience, I’ve mainly used Discourse forums (and Discord chat) for interacting with people around certain shared interests in products that are still growing, where I’m not sure it would be very useful to chat with an AI, e.g.

  • Auxy app forum (music app, now closed but they’re on Discord) where people shared feedback on each other’s WIP tracks and gave tips specific to the app. There exists no prior written material on the app besides what the community writes.
  • Hopscotch forum (drag-and-drop coding app, where I work and which I helped to seed when it started). Again, there were no written materials or detailed guides on the app anywhere on the internet before the community wrote all those posts, so any AI would be regurgitating what we wrote.
  • Sky: Children of the Light (game; the community is on Discord, but the premise is still the same). Again, I played it during beta before there was any written material at all, and I helped to WRITE the musical content on the wiki; I have been with the community since before all that content was generated.

in all cases, the AI also hasn’t ‘used the apps’ itself.

So I don’t know that AI would have been helpful, except maybe to nicely word all the stuff that the community has generated, because there was no written content at all on those topics before all those people wrote it through discussions.

(I think Marx in Das Kapital, after observing factory workers in England, said something like: people don’t use machines; machines use people.)

Not that this AI isn’t cool or useful; it’s just that it probably wouldn’t have been very helpful if it had been present while I’ve been in those communities.


I run a forum that offers support for entrepreneurs.

Since November, I’ve run @Robolivier, a friendly AI robot I created that helps users when they mention him. He also replies once to every new topic in hopes of providing the user with a satisfactory answer. I’ve added a disclaimer to each of his messages.

So far, the users have been finding it amusing and useful most of the time.

I plan to train him on my own interactions on the forum; we’ll see how that goes.

Since it’s a paid forum, bots are not an issue for me. But I can see how an open & free forum could get spammed into irrelevance.
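For anyone curious how a bot like this hangs together, here’s a minimal sketch of the “reply once to every new topic, with a disclaimer” behavior. All names, the prompt, and the disclaimer text here are my own invention, not the actual Robolivier implementation; the model call is injected as a plain callable, so nothing depends on a specific API or the real plugin’s internals.

```python
# Hypothetical sketch of a single-auto-reply forum bot. The disclaimer is
# appended to every bot post, mirroring the practice described above.

AI_DISCLAIMER = (
    "\n\n---\n"
    "*I'm an AI assistant. My answers can be wrong; "
    "a human will follow up if needed.*"
)

def reply_to_topic(title, body, ask_model):
    """Compose the bot's one automatic reply to a new topic.

    `ask_model` is any callable(prompt: str) -> str, e.g. a thin wrapper
    around a chat-completion API; injecting it keeps this logic testable
    offline and makes the underlying model easy to swap.
    """
    prompt = (
        "You are a friendly assistant on a forum for entrepreneurs.\n"
        f"A user opened a new topic titled: {title}\n\n"
        f"{body}\n\n"
        "Write one concise, helpful first reply."
    )
    # Every bot post carries a visible disclaimer so readers know its origin.
    return ask_model(prompt) + AI_DISCLAIMER
```

Wiring this to a real forum would then just be a matter of calling `reply_to_topic` from whatever new-topic event hook the platform provides.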


A crucial difference is that when AI can mass generate quality information under an arbitrarily large number of distinct “identities” (labels), it becomes cheap to—at critical/profitable moments—violate the trust that has been erroneously built up around those identities. As a human, I don’t want to violate trust I’ve built with others, not only because of my conscience, but because of the damaging effect to my reputation. But an identity artificially tied to an AI is vastly more disposable and abuseable for both capitalistic greed and political machinations.

So how am I feeling? Afraid of the future. This tech is highly likely to wreck us socially in the near future (years), and eventually (decades) destroy us physically if we don’t figure out how to stop it.


On one of the forums I run we have someone who we have suspected for years was posting under at least two different names with different personas, one more confrontational than the other. Recently he slipped up and signed a post from one email address with his name from the other, and then signed another post with a third name. So now we know for sure!

There’s a very interesting book where they run famous writers through statistical analysis of word-usage patterns. It gets interesting when books are written by multiple authors; many of the later Tom Clancy books were largely written by the other credited writers.

This type of statistical analysis was how it was determined that J. K. Rowling was also writing under a different name, before she confirmed the fact. It was also used to analyze the Federalist Papers, concluding in the early 1960s that, as many had claimed, James Madison wrote the disputed papers, including several that Hamilton had claimed authorship of. (They were published anonymously at the time because it was thought that knowing who wrote them might diminish their political effectiveness.)

So maybe there’s some value in having ChatGPT-like interfaces in our forums, if we can use them to analyze the writing styles of our participants.
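As a toy illustration of the function-word analysis behind those attribution studies (the word list and function names here are my own, and real stylometry is far more careful about sample size and word choice), a profile-and-compare sketch might look like:

```python
from collections import Counter
import math
import re

# Common English function words. Rates of words like these are the classic
# stylometric signal: authors use them unconsciously and consistently.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "it", "for", "on", "with", "as", "but", "by", "upon"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1  # avoid division by zero on empty text
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(text_a, text_b):
    """Cosine similarity between two function-word profiles, in [0, 1]."""
    a, b = profile(text_a), profile(text_b)
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a)) *
            math.sqrt(sum(x * x for x in b)))
    return dot / norm if norm else 0.0
```

Comparing a suspected sock puppet’s posts against each known account’s back catalogue with something like `similarity()` is the rough idea; the real studies used much larger word lists and proper statistical inference.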