Since this has bubbled up again:
-
I’m using it, but I’m watching costs like a hawk. My biggest concern is out-of-control token usage, at least until I can run things long enough to gain a gut-feel for what my average token usage should be. The ability to set cost limits is good, but I won’t feel personally comfortable until I know what the normal community usage is, and that just takes time.
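For concreteness, here's the kind of thing I mean by building a gut-feel baseline. This is only a sketch: it assumes you've exported per-request usage to a CSV somewhere (the file name and the `created_at` / `total_tokens` columns are invented for illustration, not a real Discourse AI export format), and it just totals tokens per day and flags the outlier days.

```python
# Hypothetical sketch: build a baseline for daily token usage from an exported
# CSV of per-request usage. The file name and column names ("created_at",
# "total_tokens") are assumptions for illustration, not a real export format.
import csv
from collections import defaultdict
from datetime import datetime
from statistics import mean, pstdev

daily_tokens = defaultdict(int)

with open("ai_usage_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Assumes ISO-8601 timestamps; adjust parsing to whatever you actually export.
        day = datetime.fromisoformat(row["created_at"]).date()
        daily_tokens[day] += int(row["total_tokens"])

totals = [daily_tokens[d] for d in sorted(daily_tokens)]
baseline = mean(totals)
spread = pstdev(totals)

print(f"days observed:      {len(totals)}")
print(f"average tokens/day: {baseline:,.0f} (stddev {spread:,.0f})")

# Flag any day that blows past the baseline by more than two standard
# deviations; those are the "out of control" days worth investigating.
for day in sorted(daily_tokens):
    if daily_tokens[day] > baseline + 2 * spread:
        print(f"spike on {day}: {daily_tokens[day]:,} tokens")
```

Once you have a few weeks of that, the cost-limit settings stop being a guess and start being "baseline plus some headroom."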
-
User trust issues are huge. It doesn’t matter what messaging I prepare or what I say as the site admin: there’s an unshakeable perception that LLMs train on user-generated content and that any use of them by the Discourse system means “selling” user data without “permission.” IME this perception is systemic among commenters and impossible to shake, because people “know” what AI companies are “really doing.” Enabling AI-based triage on a forum and saying so openly means potentially facing a floodgate of “I DO NOT CONSENT FOR YOU TO SEND MY DATA TO SOME AI TECHBRO COMPANY FOR THEM TO MAKE MONEY ON MY WORDS!” complaints. Not everyone cares about this, but the people who do are pissed off and, at the same time, totally uninterested in discussing it. I don’t have a good answer here.
-
I’m somewhat uncomfortable making the state of my forum’s spam detection dependent on how a dozen different companies’ models happen to be feeling at any given point in time. Let’s be honest here: AI spam detection, AI triage, and all the other AI features are basically us saying “Hey, let’s just make this the AI’s problem” and then trying to codify what we want it to do via prompt engineering. It works, but the process is annoyingly non-deterministic. You basically have to hope that things keep working the way they’re working. I do not like this. I do not like this at all, and it gives me anxiety. I like my tools to be properly deterministic. LLMs are the polar mf’ing opposite of deterministic, and we’re pinning some amount of forum functionality on whatever the hell OpenAI et al decide to fling our way.
That being said, I’m using both AI antispam and AI forum triage. They’re helpful. But I try to remain cognizant of the fact that these solutions must be continually monitored for efficacy.
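To make “continually monitored for efficacy” a bit more concrete, here’s the sort of spot-check I have in mind: pull a small sample of the AI’s spam/triage verdicts each week, compare them against my own manual calls, and watch the disagreement rate. The sample data below is made up purely for illustration; it isn’t pulled from any real Discourse endpoint.

```python
# Hypothetical spot-check: compare a weekly sample of the AI's spam verdicts
# against manual review, and track the disagreement rate over time.
# The (post_id, ai_verdict, human_verdict) tuples are made-up illustration
# data, not output from any real Discourse API.
SAMPLE = [
    # (post_id, ai_says_spam, human_says_spam)
    (101, True,  True),
    (102, True,  False),   # false positive: legit post flagged
    (103, False, False),
    (104, False, True),    # false negative: spam slipped through
    (105, True,  True),
]

false_positives = sum(1 for _, ai, human in SAMPLE if ai and not human)
false_negatives = sum(1 for _, ai, human in SAMPLE if not ai and human)
agreement = sum(1 for _, ai, human in SAMPLE if ai == human) / len(SAMPLE)

print(f"agreement:       {agreement:.0%}")
print(f"false positives: {false_positives}")
print(f"false negatives: {false_negatives}")

# If agreement drifts down from one week to the next (a model update, a
# provider swap, a prompt tweak), that's the signal to go re-check the
# triage prompts rather than assume they still behave the way they did.
```

It doesn’t fix the non-determinism, but at least it turns “hope it keeps working” into a number I can watch.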