What is stopping you from trying out Discourse AI?

As a goal for this year, we are trying to increase adoption of Discourse AI and its features. I’m trying to understand holistically what is stopping you from trying out Discourse AI within your community.

I understand there could be feature-specific aspects that are bothersome, but in this instance I’m trying to get a bigger-picture sense of what those reasons might be.

Additionally, if you are willing to chat with me about it over a call, I would greatly appreciate any feedback :pray:

9 Likes

AI abuse. By that I mean people using AI to, like, write EVERYTHING. It kinda makes humans feel worthless, but that’s kinda inevitable with any AI. But I might try it soon (if I could pay)!

7 Likes

Yeah, I get that. We are thinking of making Discourse AI a helpful assistant within your Discourse journey rather than something you rely on to “write” everything out, while also making people aware that AI-generated content is being displayed for GenAI features.

6 Likes

In our community, users would spam junk answers or solutions to tutorials with the feature, which was very common right around when ChatGPT launched…

4 Likes

I’d like to be able to manually assign individual POSTS to one or more AI personas as their knowledge base.

2 Likes

It’s so expensive at the moment, and I don’t want my users typing or creating AI content when the focus is human discussions.

3 Likes

We’re not on Discourse yet (evaluation still going on), but conceptually speaking, there are two major blockers:

  1. Our community is mainly tabletop roleplaying enthusiasts and human creativity is greatly valued - as a result, there’s a sizeable portion of our users with a negative perspective on generative AI. There’s also a lot of experimentation with the new tools, but effectively any use of generative AI needs to be carefully considered.

  2. We are self-hosting and have a limited budget (contributed by some of our users) - it’s enough for a decent server to host Discourse (or any other community software), but the cost of both current LLM as a service offerings and dedicated GPU servers is too high to fit it in. And the expected benefit is also too low to justify it.

6 Likes

I echo some of the comments above, especially fear of people writing too much with AI and it becoming harder to tell what a human “wrote” vs did not.

However, I think a bigger fear I have is using a remote AI. I’ve often debated how public I want my Discourse forums to be, and I guess I just don’t have a heuristic for how privacy-invasive AI APIs are these days.

I think if there were more powerful and cheaper self-hosted or even on-device LLMs, then I’d feel more comfortable. Not sure how an on-device LLM would benefit Discourse, but something about using a remote LLM causes me hesitation these days.

3 Likes

Would a Discourse-hosted open source LLM feel more comfortable than another third party provider? (though probably a little less than on-device?)

8 Likes

Maybe a little, but I think I’d still prefer self-hosted or on-device (again, if that could even work for a self-hosted forum).

Maybe if it were Discourse-hosted with the option of self-hosted, just as Discourse is now, yes, I’d be more excited for it. At that point, I think my concern would shift to quality of LLM output and cost to run.

6 Likes

I suspect you’re wanting to hear from people with more established communities than mine, but I’ll throw this out there.

There’s nothing stopping me from adding some features that are powered by AI to my Discourse site, but I’m interested in the features, not the fact that they’re powered by AI.

For example, semantic search and the ability to auto flag posts (so that they can later be reviewed by a human) are both features that I’d love to use. The ability to translate posts on the fly would also be useful.

Some examples of what I’m talking about in terms of marketing:

  • when you translate text through Google, they don’t say “translate with AI,” they just offer to translate the text

  • when you perform a search on Google, they don’t say “search with AI” even though AI is involved in returning the (currently not great) search results

  • when SoundCloud suggests tags for an uploaded track, they don’t say “tag with AI,” the UI just suggests some tags - the end user isn’t encouraged to think about the (amazing) technology that allows the system to suggest appropriate tags based on an analysis of the uploaded track

Some AI related features, in particular anything related to chat bots, could be felt to go against the spirit of what online communities are all about. By grouping all the AI related features under the heading “Discourse AI,” you’re risking turning people away from features that happen to be powered by AI, but that also might be incredibly useful for their forums.

I’m not a marketing genius, so take all of the above with a grain of salt.

28 Likes

Essential tools made with AI to help with moderation are okay with me. I’d like a way to generate images, but I refuse any use of AI for creation if it risks losing the spirit of the community.

Social media is dying for a reason; that’s the point.

2 Likes

I’m using it. On my forum, AI is used to explain things[1]. But I don’t allow it for general members because it is just too expensive — I’m at a scale where 50 bucks a month is awfully expensive :wink:

But its main issue for me is not the fault of AI but how much work is needed to create a decent knowledge base. The greatest weakness of AI, for me, is how badly it uses embeddings, which means the content created by the community isn’t put to good enough use.
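To illustrate what I mean by embeddings, semantic retrieval boils down to roughly this (a sketch of the general technique, not Discourse AI’s actual implementation):

```python
# Rough sketch of embeddings-based retrieval -- the general technique,
# not Discourse AI's actual implementation.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_posts(query_emb: list[float],
              post_embs: dict[int, list[float]], k: int = 3):
    """Rank stored post embeddings against the query; keep the best k."""
    return sorted(post_embs.items(),
                  key=lambda item: cosine_similarity(query_emb, item[1]),
                  reverse=True)[:k]
```

If the embeddings or the chunking of community content are poor, the wrong posts come back as context, and no amount of prompt tweaking fixes that.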

Yes, I know. It mostly comes from the limitations of OpenAI. And the tech is not as production-ready as claimed. Plus there is a language barrier.

The cherry on top: creating good prompts is really hard. And after some weeks something changes and the prompts must be fixed again.

It may help with some background jobs, some day. But looking at the team struggling with daily summaries and the like (and, from my POV, a big waste of time and money, sorry :woozy_face:), I claim it needs too much work that doesn’t actually increase revenue, whether that is counted in dollars or happiness. It is not decreasing the workload of admins and devs; it increases it.

But of course big corporations, where some 10 grand is less than the monthly refill of the energy drink stock, can and should automate things using AI, including the support forum run by Discourse. That is, and has to be, the target of CDCK — but I don’t know how many of those are here sharing their thoughts.


  1. e.g. explain the rules of baseball and guess why it is such a popular sport even though it is so slow and even boring ↩︎

3 Likes

Seems like the general sentiment is that GenAI is either causing issues or has the potential to wreak havoc within communities, taking away from the natural exchange and human creativity. I think with Discourse AI we are trying to strike a balance there, so it’s only there to enhance your writing, not write it out for you completely.

@simon great point about “using AI” branding on features. Certainly that branding can turn some people away, and conversely it can make people appreciate the awareness that they are interacting with AI.

7 Likes

Mostly inhibited by the fact that it is not free.

5 Likes

We run a small creative community for interactive fiction fans, creators, authors, and artists. We don’t usually run into problems where the moderation duties are too great, and our mods and many of our community are happy to answer questions. So we wouldn’t use AI to “do forum work”.

While we often discuss AI in the context of “text generation” as it relates to the games we create, and many people are interested in it, we have made a rule that AI-generated content is not allowed to be posted in a situation where an AI construct is “pretending” to be a real forum user and trying to fool people in an attempt to pass the Turing test. We do not want our forum used as a testing ground for AI, nor our historical content, going back to 2006, used as a field to scrape for text content.

If people post AI-generated material, either for their own purposes as a writing assistant or as a demonstration for a game they are building, we ask them to disclaim what it is or cite it like a source, as it is not technically “their own” material. We’ve had several spammers that tried to participate and raise trust levels by (I assume) feeding a topic to an AI like ChatGPT and posting the results so it would seem like legitimate forum participation. Since our entire genre of art is creating a text work that “responds” to interaction by a user (think text adventures like Zork), most people can peg content that is machine generated pretty easily and flag it.

An AI or LLM cannot agree to the Terms of Service and Code of Conduct. If it happens to offend anyone there’s no way to moderate it. Since we expect our users to be transparent about this type of content, it would probably be hypocritical for our site to use any sort of AI generation for forum content and responses.

We also had to quell a flame war that threatened to erupt because many of our users are graphic artists who are very sensitive about their online works being stolen as source material for AI art, and a topic about using AI art in games quickly devolved into personal attacks against people who were okay with it. So AI is one of the subjects we have to watch closely to make sure it’s not discussed in the context of willful plagiarism.

TL;DR: AI is a hot button issue that we had to make rules about so we probably shouldn’t use it in the context of a forum populated by lots of artists and writers.

10 Likes

We’re sick of the AI bandwagon, which mostly produces a mountain of garbage and causes environmental damage in return. Can’t wait for the bubble to burst and for everyone to be obsessed with whatever stupid thing comes next. Or, even better, to focus on something that’s actually useful, or just improve the basic functionality that everyone’s been ignoring lately while cramming AI garbage into everything, because everyone else is doing the same and it’s what perpetually ignorant investors are throwing money at right now…

Probably not the answer you wanted, but that’s the honest truth. :slight_smile:

5 Likes

Main barriers over here are navigating internal restrictions on third-party tools and data access (to get permission to start), and not having a clear method for estimating costs (token usage, etc.), which prevents us from getting budget approved to start using the new tools.
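For illustration, the kind of back-of-the-envelope estimate we would need for a budget request looks something like this (all prices and usage figures are placeholders, not real Discourse AI numbers):

```python
# Back-of-the-envelope monthly cost estimate from token usage.
# All prices and usage figures below are placeholders.
PRICE_PER_1K_INPUT = 0.0005   # USD per 1K input tokens (hypothetical)
PRICE_PER_1K_OUTPUT = 0.0015  # USD per 1K output tokens (hypothetical)

def monthly_cost(requests_per_day: int, input_tokens: int,
                 output_tokens: int, days: int = 30) -> float:
    """Estimate monthly spend for one AI feature."""
    per_request = ((input_tokens / 1000) * PRICE_PER_1K_INPUT
                   + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT)
    return requests_per_day * per_request * days

# e.g. 200 summaries a day, ~4K tokens in and ~500 tokens out each:
print(f"${monthly_cost(200, 4000, 500):.2f}/month")  # $16.50/month
```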

I’d love to start using some of them, particularly the sentiment analysis reporting. There seems to be a significant gap between awareness of the tools and implementing them, though.

There are also peripheral concerns around bringing even more GenAI content into our community: it’s the single most critical topic affecting our community members and their livelihoods. We also see a lot of new forum accounts posting obvious AI summaries of threads (and some fairly inaccurate attempts at “answering” questions from existing posts), so our users already feel like the human element of the community is being drowned out by AI tools, rather than being supported or augmented by them.

5 Likes

I see a lot of comments focusing on AI-generated content. Speaking for our enterprise community, I’d like to see continued investment in the administrative side of Discourse AI, that is, tools focused on helping us show the value of our community to our leadership teams. Because of our (relatively) clean categorization and tagging, we used Discourse AI quite successfully in our quarterly business reviews to show things like, “here’s the sentiment of this subset of users, in these categories, discussing this product, over time.”

Graphing that and showing our organization a more quantifiable visual of the value of community, via a trailing indicator like this, netted an incredible response and continued investment. More of this would be great, and would get us to dive deeper.
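To give a concrete sense of the rollup we graphed (a sketch with made-up numbers; the real scores came from Discourse AI’s sentiment analysis):

```python
# Sketch of the "sentiment over time" rollup we graph for leadership.
# The rows below are made-up placeholders for exported sentiment scores.
from collections import defaultdict

# (month, category, sentiment score in [-1, 1])
rows = [
    ("2024-01", "product-a", 0.4),
    ("2024-01", "product-a", -0.2),
    ("2024-02", "product-a", 0.6),
]

def monthly_sentiment(rows, category):
    """Average sentiment per month for one category, ready to graph."""
    buckets = defaultdict(list)
    for month, cat, score in rows:
        if cat == category:
            buckets[month].append(score)
    return {month: sum(scores) / len(scores)
            for month, scores in sorted(buckets.items())}

print(monthly_sentiment(rows, "product-a"))  # ~{'2024-01': 0.1, '2024-02': 0.6}
```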

On the subject of the AI bot, we’d really like to see more customization in the ability to middleman the AI bot experience. We have strict requirements on how we can approach sending user data into AI services, and as such we’ve built our own AWS Lambda service between Discourse and AWS Bedrock. The ability to specify the endpoint that the bot works with, along with documentation on the object model, would allow us to fully realize the existing AI bot in our community. For now, we’ve built our own bot experience using PMs and webhooks.
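For context, the middleman itself is conceptually small; a simplified sketch of the idea (the model ID, payload shape, and scrubbing step are illustrative, not our production code):

```python
# Simplified sketch of a Lambda sitting between Discourse and AWS Bedrock.
# Model ID and payload fields are illustrative; scrub() stands in for
# whatever policy enforcement your org requires before data leaves it.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def scrub(text: str) -> str:
    """Placeholder: redact user data per internal policy before sending."""
    return text

def handler(event, context):
    body = json.loads(event["body"])
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": scrub(body["prompt"])}],
        }),
    )
    completion = json.loads(response["body"].read())
    return {"statusCode": 200, "body": json.dumps(completion)}
```

With a configurable endpoint and a documented object model, the stock AI bot could point at a gateway like this instead of a provider directly.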

Anything you can do to bring AI more to the administration and moderation of Discourse would be the best investment, from our perspective. We need to build and support this community in a scalable way for the business, and I believe there are many opportunities around Discourse using AI to enable that.

11 Likes

Please reach out to team@discourse.

It’s a huge shame not to have “related topics” enabled on your site; it is 100% hosted by us (and already included in all plans), so there are no data concerns, token concerns, or pricing concerns.

Similarly, we self-host the sentiment models and would be happy to explore them with you. I remain a little bit skeptical about sentiment because I am trying to understand what specific problems it solves.

We are also exploring running open models on our own hardware; Llama 3 is surprisingly capable. It is possible we may be able to power features such as summaries with our own models longer term, especially if we can get the 7B models to do them confidently.

I hear you on budgets. Discourse AI already tracks token usage per person, and we intend to add quotas to the plugin.

This is very much on our roadmap; we are working right now on the ability to point at a URL and specify the “dialect” it speaks.
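To sketch what “dialect” means here (illustrative only, not the plugin’s actual code), the same logical request gets rendered into whatever payload shape the endpoint expects:

```python
# Illustrative sketch of an endpoint "dialect": one logical request rendered
# into different payload shapes. Not the plugin's actual code.
def to_payload(dialect: str, prompt: str, max_tokens: int = 512) -> dict:
    if dialect == "chat":        # OpenAI-style chat completions
        return {"messages": [{"role": "user", "content": prompt}],
                "max_tokens": max_tokens}
    if dialect == "completion":  # older prompt-in/text-out endpoints
        return {"prompt": prompt, "max_new_tokens": max_tokens}
    raise ValueError(f"unknown dialect: {dialect}")
```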

9 Likes