What is stopping you from trying out Discourse AI?

The first reason was mainly just that I didn’t really know what Discourse AI can actually do. I’ve seen some of it around Meta, but otherwise I didn’t know much about it.

But this is answered for me now with this page :+1:

If I hadn’t seen that though, I wouldn’t know what it could do.

It might be the case for a lot of other people as well, but I don’t know.

7 Likes

The mere mention of “AI” turns off most people in our community and polarizes the rest. In our case, paying money to help big business take big data to the next level, with crazy levels of energy consumption and global surveillance, is… against all our values.

Open source and self-hosted could mitigate this, but then again… The main features we would use it for are better search and better suggestions. And do we really need to rely on “AI” for that? Aren’t there improvements in content databases and search-and-suggest algorithms that work locally, at the same cost and energy consumption, without our becoming an active player in the big AI game?

4 Likes

I might use AI if it were used solely as a built-in “proofreader” tool, for example. Or maybe even as a translation tool.

1 Like

Nothing is stopping me from using Discourse AI; indeed, I rather think it is now one of the most fantastic parts of the entire application.

However, as I work as an educator and AI researcher/engineer, I would like to offer my humble perspective on this matter as I work closely with people outside of this proverbial “AI bubble.” In my experience, people are hesitant to use/implement AI for three main reasons:

  1. They can’t manage the notorious LLM hallucinations;
  2. They simply can’t find an adequate use-case for it in order to justify getting involved with it;
  3. They generally misunderstand its capabilities and end up disappointed.

In reality, all of these dilemmas can be mitigated with decent prompt engineering coupled with education on the subject, so it isn’t a flaw on the part of Discourse whatsoever. Perhaps, however, if the company made an “AI for Dummies” crash course specific to the Discourse AI plugin, the rates of adoption and comfort would both grow. AI has a lot of buzz now, so many people want to use it even if they don’t know how or what for.

Nonetheless, my intuition tells me that this set of features will be driving Discourse’s popularity in the near future as it is simply ingenious and so finely integrated into the system. Truly nice work!

6 Likes

I find this quite a strange phrasing - and I note it not as a criticism, but because it echoes the thread title: “What is stopping you from trying out Discourse AI?”

In both cases, it feels like there’s a presumption that AI is obviously a good and valuable thing, and somehow someone is not seeing that.

But I see it differently: today’s “AI” is a bubble of fascinating technologies, very appealing to people who like the newest things for their own sake, and possibly of value here and there - but in no way intrinsically valuable in its own right.

The question has to be about what the benefits are - what will make things better, in this case better for forum members or perhaps forum admins.

Which brings me to

Or to put it another way, the sheer unreliability of today’s offerings. For all that they sometimes help, in summarising or suggesting, they all too often go wrong. An unreliable summary? An unreliable sentiment analysis? That can only make for a worse experience!

I have a little sympathy for someone who is employed to push AI, or employed to integrate it into a product - that’s the nature of employment, sometimes the roles are themselves misguided.

But I really don’t see the presumption that this is a great thing which will help all forums. Some forums, maybe. Tell us where the value is supposed to be, and include the costs of that value being delivered unreliably.

3 Likes

My point is that AI can improve very many things, and in my experience some of my clients want to enhance their business or operations in some way but aren’t sure precisely how to do that. I do not at all think everyone should try to use AI, as that is merely falling into the current hype trap.

Definitely not inherently valuable, which I didn’t imply. As with most technologies, it can be overused and underdeliver, or just outright abused.

I’ve yet to experience this “sheer unreliability.” Perhaps it depends on what you’re trying to achieve? Very general and complex tasks can still lead to unreliable results, that’s true. Again, adequate know-how in prompt engineering and the applications of this technology is a must in order to mitigate this, so my point remains.

I appreciate your sympathy; however, I don’t require it :slightly_smiling_face:. I am self-employed, with no need to “push” AI on people who don’t need it, and it is indeed quite presumptuous of you to presume that someone or their work is misguided simply on the basis of a mostly unfounded opinion of a topic that seems foreign to you, judging by how you write about it.

I’ve developed AI systems which automatically detect and segment brain tumors and provide treatment risk analyses. I don’t think patients who benefit from such advancements would agree that such a role is “misguided.” Some of us research AI for the benefit of humanity and not just to make a quick buck.

Again, I didn’t presume that it would help all forums and I rather believe that it may be foolish for admins to adopt it “just because” without any domain knowledge and end-goal in mind.

Side note:
I do humbly encourage you to be a bit less discriminatory and a bit more informed in your future responses on the matter of AI, or at least don’t take someone else’s responses out of context, then accuse them of being misguided and try to offer your sympathies :wink:

EDIT:
Forgot to add that Discourse adding more options for customizations (for example: the ability to properly prompt engineer the topic summarizer) would mitigate many risks. They’re already going this direction via personas and the ability to choose which models to use, etc.

1 Like

Apologies, I’ve been stewing about this thread since it started, and happened to use your contribution as my stepping-off point. I mean nothing specific or personal in my observations.

For me, to use AI in discourse, as @Saif is interested to know, I would need to know the value proposition. At present my forum is working well, and the costs are under control. To add something, which has a monetary cost, I would need to know what the benefit is. It’s not that anything is stopping me from trying out AI.

4 Likes

I’ve disabled Akismet and I’m testing Discourse AI Spam and Persona internally; my LLM is Google Gemini Flash 2.0.

My question is: will it learn from interactions and deep-learn from human feedback? I ask because the option to cache data seems to be allowed only in the pro version.

1 Like

It will not learn anything, but you can supply custom instructions that are specific to your forum if you notice it is making mistakes.

2 Likes

Any chance of locking persona mentions to a specific topic?

Since this has bubbled up again:

  1. I’m using it, but I’m watching costs like a hawk. My biggest concern is out-of-control token usage, at least until I can run things long enough to gain a gut-feel for what my average token usage should be. The ability to set cost limits is good, but I won’t feel personally comfortable until I know what the normal community usage is, and that just takes time.

  2. User trust issues are huge. It doesn’t matter what messaging I prepare or what I say as the site admin—there’s an unshakeable perception that LLMs use user-generated content for training and that any usage by the Discourse system means “selling” user data without “permission.” This is an issue that IME is systemic among many commenters and is impossible to shake, because people “know” what AI companies are “really doing.” Enabling AI-based triage on a forum and saying you’re doing so means potentially facing a floodgate of “I DO NOT CONSENT FOR YOU TO SEND MY DATA TO SOME AI TECHBRO COMPANY FOR THEM TO MAKE MONEY ON MY WORDS!” complaints. Not everyone is concerned about this kind of thing, but the people who are, are pissed off while simultaneously being totally uninterested in discussion. I don’t have a good answer here.

  3. I am somewhat uncomfortable at placing the state of my forum’s spam detection on how a dozen different companies’ models happen to be feeling at any given point in time. Let’s be honest here: AI spam detection, AI triage, and all other AI features are basically us saying “Hey, let’s just make this the AI’s problem” and then trying to codify what we want it to do via prompt engineering. It works, but the process is annoyingly non-deterministic. You basically have to just hope that things keep working how they’re working. I do not like this. I do not like this at all, and it gives me anxiety. I like my tools to be properly deterministic. LLMs are the polar mf’ing opposite of deterministic, and we’re pinning some amount of forum functionality on whatever the hell OpenAI et al decide to fling our way.

That being said, I’m using both AI antispam and AI forum triage. It’s helpful. But I try to remain cognizant of the fact that these solutions must be continually monitored for efficacy.

2 Likes

Who pays for the compute costs associated with the plugin?
Token generation isn’t free, and renting a GPU in the cloud costs about $250/mo for the cheapest plan (NVIDIA T4 from Google Cloud).

Does this plugin cost $250 per month?

1 Like

I think most people are using a paid plan with one of the larger AI service providers (there’s a list of supported models here in the documentation).

Unfortunately I’m not aware of any affordable options for self-hosters - anything GPU-based I know of is in the price range you mentioned, and I suspect CPU-based inference will be too slow, even on more powerful machines.

1 Like

Self-hosted Discourse AI lets you connect to different commercial models via API. I’m finding that my spend for AI triage plus AI anti-spam plus some AI nightly summaries is about $0.03/month with Gemini Flash 2.0.
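To show how the spend can plausibly be that low, here’s a rough back-of-the-envelope calculation - the post volume, tokens-per-check, and price are illustrative assumptions, not Gemini’s actual rates:

```python
# Back-of-envelope: why commercial-API spend can be pennies per month
# for a small forum. All numbers below are illustrative assumptions.
price_per_million_input = 0.10   # hypothetical $/1M input tokens
posts_per_day = 20               # small-forum traffic estimate
tokens_per_check = 500           # prompt + post text for one spam check

monthly_tokens = posts_per_day * tokens_per_check * 30
monthly_cost = monthly_tokens / 1_000_000 * price_per_million_input

print(f"{monthly_tokens:,} tokens/month ≈ ${monthly_cost:.2f}")
```

Under these assumptions a small forum burns about 300,000 tokens a month, which at fractions of a dollar per million tokens lands in the low cents.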

3 Likes

I think we have a different understanding of “self-hosted AI” here. For me, that implies connecting to a model/service that runs locally. What you describe is possible, but effectively the local part is only a proxy, and data will leave your server/be transferred to the API provider. For many people that will be fine, but for me/our community, that’s not an option.

1 Like

Apologies, I was referring to self-hosted Discourse. My mistake.

3 Likes