Help with Discourse AI

I want to use Discourse AI and have a few questions before I enable it.
What data does Discourse AI send to the LLM provider (e.g., OpenAI or Google)? Does it include any sensitive data?

Is any of this data being used to train the LLM?

Thanks for the help!

“Discourse AI”, the plugin, is agnostic of the AI model provider. For every feature, you can pick the provider that aligns best with your needs, whether it be privacy, feature set, pricing, etc.

If you are using, let's say, AWS as the AI provider, then you are subject to the AWS Bedrock terms of service regarding how your data is used for training and so on.


Thank you, Falco.

Do you know how it works with the Google Gemini LLM? I'm most interested in the summarize, sentiment, and chatbot features.


It works well with Gemini. We have customers using it.


Oh great!

Would you know about this?

What data is being utilized by Google Gemini LLM? Does it include any sensitive data?

Is any of this data being used to train the LLM?

This is something you’d need to ask Google.

We have dozens of features in Discourse AI, and they will send data to your preferred LLM provider. For example, the summarize feature will send the topic posts and ask for a summary. If your posts contain sensitive data, that will be sent.
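To make that concrete, here is a hypothetical sketch, not Discourse's actual implementation, of the kind of OpenAI-style chat-completion payload a summarize feature might assemble. The model name, prompt wording, and helper name are assumptions for illustration; the point is that the full post text, sensitive content included, ends up in the request body sent to the provider.

```python
import json

def build_summary_payload(posts, model="gpt-4o-mini"):
    """Assemble a hypothetical OpenAI-style chat-completion request
    asking an LLM to summarize a topic's posts. Everything in the
    returned dict, including the raw post text, is what would be
    sent to the provider."""
    thread = "\n\n".join(f"{p['author']}: {p['text']}" for p in posts)
    return {
        "model": model,  # assumed model name, for illustration only
        "messages": [
            {"role": "system", "content": "Summarize the forum topic below."},
            {"role": "user", "content": thread},
        ],
    }

posts = [
    {"author": "alice", "text": "Our API key is leaking in the logs."},
    {"author": "bob", "text": "Confirmed, rotating credentials now."},
]
# The entire serialized payload, sensitive text and all, leaves your server:
print(json.dumps(build_summary_payload(posts), indent=2))
```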

We log every request sent to the LLM in the database, so you can audit it at any time.

And if you have stricter data locality requirements, you can run an LLM on your own servers to keep data under your control.
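As a rough sketch of the self-hosted option: many local inference servers (vLLM, Ollama, llama.cpp's server) expose an OpenAI-compatible HTTP API, so a client pointed at your own host keeps post content on your infrastructure. The base URL, port, and model name below are placeholder assumptions, not Discourse settings.

```python
import json
import urllib.request

def chat_endpoint(base_url):
    """Return the OpenAI-compatible chat-completions URL for a
    self-hosted inference server (e.g. vLLM or Ollama)."""
    return base_url.rstrip("/") + "/v1/chat/completions"

def summarize_locally(base_url, text, model="local-model"):
    """Send a summary request to an LLM running on your own server,
    so the text never leaves your infrastructure. base_url and model
    are placeholders for your deployment."""
    req = urllib.request.Request(
        chat_endpoint(base_url),
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": f"Summarize:\n{text}"}],
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires an OpenAI-compatible server listening locally):
# print(summarize_locally("http://localhost:8000", "Long topic text..."))
```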
