This touches on one of the more profound questions about AI, namely: does it matter, and in what ways does it matter, whether I’m communicating with an ordinary person, an “augmented” person, or an artificial intelligence? Let’s call that the “interlocutor identity problem”.
In some ways humans have always been dealing with the interlocutor identity problem. In philosophy this is sometimes known as the “problem of other minds”. One of the better-known statements of this kind of doubt is Descartes’ Evil Demon, the context for his “I think, therefore I am”. The question of how we will, or how we should, feel about relating to an AI is essentially an extension of the same question.
It arises in an interesting way in the context of forums, because in many ways I think we already accept a high level of opacity in interlocutor identity. All we have to judge our interlocutors by are their words and possibly their self-presentation (i.e. avatar, username, etc.). For better or worse, in many cases we don’t think too much about interlocutor identity, or at least we’re willing to accept the opacity.
This context leads me to some more circumspect conclusions (or perhaps questions) about how all this affects forums. For example, let’s consider spam. We don’t like spam for a number of reasons, but let’s focus on two related ones:
- It’s ugly noise that makes a forum less attractive.
- It has a commercial, malicious or other purpose we don’t want to abet.
Let’s assume that the application of an LLM to “spam” reduces the first and enables a more nuanced approach to the second. If it doesn’t do these things, then existing spam-elimination methods will suffice to catch it. In some ways it’s a misnomer to call this spam; it’s more like “sophisticated automated marketing”, or… “Sam” (sorry, I couldn’t resist).
However, once we get into Sam territory, we invariably get closer to the same underlying interlocutor identity problem. It is already the case that some folks use their presence on a forum to market in a nuanced, indirect way. The question is whether we care about the identity of the actor doing it.
I definitely share your concerns about its effect on the labor market. However, for related reasons, I’m not so sure it will lead to a dumber, lazier, uglier, or more commercial forum ecosystem. If an LLM does this to a forum, that forum will indeed probably not fare well, but then its erstwhile users will find another forum not affected by an LLM, or affected by a more sophisticated one.
Forums facilitate markets of knowledge, interest, and human connection. If an LLM succeeds in those markets it will undoubtedly affect labor, but I’m not so sure it will slacken demand. Indeed, success entails the opposite.