How to make it easy to specify whether content is human-written, AI-assisted, or AI-generated?

I’m one of the administrators of a forum with 10k+ members. We have noticed that some people have started to heavily use AI for replying to posts. It is good that they are getting more active and trying to be helpful, but some of the responses can be confusing, as AI chatbot answers tend to be very confident yet inaccurate. We would not like to ban AI use, but instead encourage people to disclose when it is used, because this may help other users (and LLMs trained on the forum content) decide how much to trust the content of a post.

We would like to add a “Content origin” selector to each new reply post with the following options:

  • Human-written
  • AI-assisted (reviewed and edited)
  • AI-generated (lightly reviewed)

I would assume that many other forums need similar functionality, so maybe it could be added as a standard feature or plugin. But if that doesn’t happen, it would be great if someone could give advice on how to set this up on our hosted Discourse instance.

3 Likes

If you trust your users, you could just create tags for it? Oh, edit: sorry, I just read that you said “each new reply post”.

Or do you mean you want it “detected” (by AI, ironically) and automatically have some sort of indication added?

This sounds like a use case for the feature request “Tag replies, not just topics”, where it’s been noted that:

…so you might chime in to upvote that request.

I’m not a developer but I think a non-tag approach would probably make use of the post_custom_fields table that’s used by plugins like the Custom Wizard Plugin. Incorporating a custom post field into the UI — for selection, display, searching/filtering — sounds like a pretty substantial plugin project.
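To make that idea concrete, here is a rough sketch of what such a plugin’s `plugin.rb` might look like. The field name `content_origin` and the site setting are my own invented names, not an existing plugin; the registration calls are the standard Discourse plugin hooks for post custom fields, and this would only run inside a Discourse install (so self-hosting or an official plugin would be required).

```ruby
# name: content-origin
# about: Lets authors declare whether a post is human-written, AI-assisted, or AI-generated
# version: 0.1

# NOTE: sketch only — `content_origin` is a hypothetical field name.
enabled_site_setting :content_origin_enabled

after_initialize do
  # Store the selection in the post_custom_fields table as a string.
  register_post_custom_field_type("content_origin", :string)

  # Allow the composer to submit the field when creating a post.
  add_permitted_post_create_param("content_origin")

  # Expose the field to the client so the UI can render a badge on the post.
  add_to_serializer(:post, :content_origin) do
    object.custom_fields["content_origin"]
  end
end
```

The composer widget for selecting the value, plus search/filter support, would still need to be built on the client side, which is where most of the “substantial project” effort would go.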

1 Like

A custom plugin won’t work on a forum hosted by Discourse, though.

I think you could also do something with a theme component. Maybe one that simply adds the information as a [wrap] to the final post and includes some styling on how the information is highlighted.
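As a sketch of that idea (the wrap name `ai-generated` and the styling below are purely illustrative, not an existing convention), a reply could end with:

```
[wrap=ai-generated]
This reply was drafted with an AI assistant and lightly reviewed.
[/wrap]
```

Discourse renders `[wrap=name]` blocks as `<div class="d-wrap" data-wrap="name">`, so the theme component’s stylesheet could highlight it with something like:

```css
/* Targets the div Discourse generates for [wrap=ai-generated] */
.d-wrap[data-wrap="ai-generated"] {
  border-left: 4px solid var(--tertiary);
  padding: 0.5em 1em;
  font-style: italic;
}
```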

1 Like

I would trust the users, but if AI use is obvious and not disclosed, then moderators could also mark the post accordingly.

It would be a nice convenience if the software suggested marking a post as AI-assisted or AI-generated, but I’m not sure that is feasible, and it is not essential. We could force users to think about it by leaving the selector uninitialized by default, so they would have to set the content origin explicitly.

It might be easy to forget to add a tag (users would need to be reminded more often to use it), but it could work.

This sounds like a good approach. How much work would it be to set this up? Do you think there is a chance of this being added as a built-in feature or official plugin?

There is always a chance, but I think it’s fairly low that this would happen within the foreseeable future, unless that feature request explodes with votes.

With the current tag functionality, you can require topics in a category to have a tag from a specified tag group. While reply tags don’t exist (and sound unlikely any time soon), one could hope for a similar ability there.

If the purpose is to inform a human reader that a text involves AI, then in my view a tag is not the solution. People do not read tags as part of the actual content. The indication therefore has to be part of the content itself.

I personally use the method provided by Discourse:

<details class='ai-quote' open>
<summary>
<span>Title</span>
<span title='Discussion with AI'>AI</span>
</summary>
More or less content…
</details>

Alternatively, I use the Quote Callouts theme component.

Teaching users to apply these correctly is, if not impossible, then at least extremely difficult for users coming from a Facebook background. So I have settled on requiring that AI-generated content must in some way be marked. As long as there is a visible indicator within the content itself and the AI material is clearly separated, I am satisfied.

I did try using a tag, but I noticed very quickly that it drew no reaction. In addition, I want to use tags for content classification, and it is hard for me to imagine that anyone would ever try to find all the content on my forum that involved AI. Especially since the use of AI is likely to escalate over time. It might even reach the point where a tag would be needed to indicate that content is human-made (and not just with an emoji, since such websites and even forums already exist).

5 Likes

Do you enforce this requirement somehow?

1 Like

It’s a low-traffic forum, so I basically read almost everything, and moderators can and will flag possible cases. It isn’t always obvious when AI is heavily used or copy-pasted, and I don’t know how detection could be automated, so we rely on neighbourhood watch :smirking_face:

But my users know the rule quite well.

2 Likes

A bit off-topic, but on a (non-Discourse) forum we have a topic to discuss everything about generative AI, from pros to cons.
An enthusiastic regular started posting very argumentative messages contesting other members’ points in a slightly heated debate.

The way his later posts were written gave it away: in particular, one post contained four bullet points, all using the “It’s not… It’s” structure.

When called out, he admitted to using AI to write his “last two messages” and explained that it was not an issue because he had instructed the AI to reflect his opinions, and that if people couldn’t refute his points, they were wrong regardless of whether his posts were AI-written or not.

Needless to say, the other users were pretty pissed off when they realized they had been spending their time and energy debating with an AI.

7 Likes