How do you handle AI responses in your community?

Hello everyone!

I'm wondering how other community managers handle AI responses in discussion- and question-type categories.

Our current policy is that using AI is fine as long as you validate the response.

After seeing some responses like the following example, I'm starting to think I should create a stricter policy against AI.

Good question, I’ll help you call the Slack API to post a message to a channel. Here’s how to do it:

Basic API Request

You’ll use the chat.postMessage endpoint. Here’s a simple example using curl:

curl -X POST https://slack.com/api/chat.postMessage \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_SLACK_TOKEN" \
  -d '{
    "channel": "C1234567890",
    "text": "Hello from the Slack API!"
  }'

Setup Steps

1. Get a Slack Token:

  • Go to the Slack app management page (https://api.slack.com/apps)
  • Create a new app or select an existing one
  • Navigate to “OAuth & Permissions”
  • Add the chat:write scope under “Bot Token Scopes”
  • Install/reinstall the app to your workspace
  • Copy the “Bot User OAuth Token” (starts with xoxb-)

2. Get the Channel ID:

  • Right-click on a channel in Slack → “View channel details”
  • The channel ID is at the bottom (starts with C)
  • Or use the channel name like #general
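Before posting, it can help to sanity-check the token from step 1 against Slack's auth.test method, which returns an "ok" flag plus workspace info on success, or an "error" code such as "invalid_auth" on failure. The helper below is an illustrative sketch, not part of any official SDK; only the auth.test response shape is documented Slack behavior:

```python
def summarize_auth_test(data):
    """Interpret the JSON returned by Slack's auth.test method.

    auth.test always returns an "ok" flag; on success it includes workspace
    info such as "team", and on failure an "error" code like "invalid_auth".
    """
    if data.get("ok"):
        return f"token valid for workspace {data.get('team')}"
    return f"token rejected: {data.get('error', 'unknown_error')}"

# To fetch the response, reuse the same pattern as chat.postMessage, e.g.:
#   curl -H "Authorization: Bearer YOUR_SLACK_TOKEN" https://slack.com/api/auth.test
print(summarize_auth_test({"ok": False, "error": "invalid_auth"}))
# → token rejected: invalid_auth
```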

Code Examples

Python:

import requests

url = "https://slack.com/api/chat.postMessage"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_SLACK_TOKEN"
}
data = {
    "channel": "C1234567890",
    "text": "Hello from Python!"
}

response = requests.post(url, headers=headers, json=data)
print(response.json())

JavaScript (Node.js):

const axios = require('axios');

const url = 'https://slack.com/api/chat.postMessage';
const data = {
  channel: 'C1234567890',
  text: 'Hello from Node.js!'
};

axios.post(url, data, {
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_SLACK_TOKEN'
  }
})
.then(response => console.log(response.data))
.catch(error => console.error(error));

Additional Options

You can enhance your message with:

  • Blocks for rich formatting
  • Attachments for structured data
  • Thread replies using thread_ts
  • Mentions like <@U12345678>
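As an illustration of combining the thread_ts and Blocks options above, here is a minimal sketch; the helper name is made up for this example, but the payload shape follows the chat.postMessage schema:

```python
def build_thread_reply(channel, thread_ts, text):
    """Build a chat.postMessage payload that replies in an existing thread
    and uses a Block Kit section for richer formatting."""
    return {
        "channel": channel,
        "thread_ts": thread_ts,  # ts of the parent message to reply under
        "text": text,            # plain-text fallback for notifications
        "blocks": [
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": f"*Update:* {text}"},
            }
        ],
    }

payload = build_thread_reply("C1234567890", "1712345678.000100", "Hello again!")
# Send it exactly like the plain examples above, e.g.:
#   requests.post(url, headers=headers, json=payload)
```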

Would you like help with any specific programming language or more advanced message formatting?

Cheers, {user}!

While this may solve the problem, I could have asked AI myself and gotten the same response. The issue is not widespread yet, but I am concerned that we are losing the personality of each individual in the community. The great thing about a community is the people in it and their unique ideas, thoughts, experiences, and words.

I'm very curious how other communities handle this! Please share your thoughts!

5 Likes

My policy is don’t use AI to write or edit forum posts. Something like: we want to hear what you have to say, not what the AI has to say. It’s okay to paste in some AI text and then comment about what the AI wrote as long as you disclose that it’s AI above the AI content, for example:

“I asked ChatGPT about ___ and it said ___. Here’s what I think about what ChatGPT wrote.”

I carefully guard the quality of the content. I think that AI-generated content from 2025 will look dated in 10 years, kind of like how old movie effects don’t look passable any more.

If people are asking questions and AI is answering, why would they go to a forum instead of just asking ChatGPT? I think the main value in forums in the future is that you will be able to get answers from humans instead of bots.

8 Likes

Great points @j127,

This is roughly where we were leaning: AI content is acceptable as long as there's context and the user's own words around it.

I don't want to look around in a few months and see nothing but perfectly written AI responses across our forums, so I'm trying to get ahead of it now. Bring back the spelling mistakes :slight_smile:

4 Likes

Here is one point to be mindful of: discouraging such practices might drive potential new users away. A possible workaround is to create a dedicated category and let them put the content there. That way the user feels they have contributed something of value, others can link to it from other parts of the forum if needed, and users who are not interested in AI posts can skip the category and all of the links.

Posting AI content on user forums is here to stay. I don't see norms settling around it within the next few years; it's similar to the question of AI use in formal education.


However, with the new wave of AI, LLMs, and diffusion models, the only constant is change, so my reply here shouldn't be taken as carved in stone, or even as something I'll still agree with in the future.

2 Likes

It really depends, right? In the example above, I would be upset at the poster if the answer contained clear errors, especially errors that would lead readers into bigger problems. I'm okay with people posting AI responses if they've also taken the time to validate and verify the appropriateness and correctness of the information. Even better if they agree the generated content is similar to, or better than, how they would have shared this information anyway.

If, on the other hand, you’ve just thrown something into an LLM, copy-pasted the answer, and expect me as a reader of your topic to figure out if that content is correct, then I’m not going to be happy. Not only are you wasting my time by presenting false information as correct, but you’re also reducing the trustworthiness of the community in general - or at least your own future responses.

Looking forward, I suspect many well-intentioned people will move away from using LLMs in forums over the next few years, and those remaining will be bad actors or people who just don't get forums in the first place. It's purely anecdotal, but I've used LLMs daily for about two years now, and I'm moving more and more toward self-generated content. Yesterday I wanted to write a blog post that I wasn't super invested in. I decided to try GPT, but no matter what I did, it always sounded like GPT, and I hated that. So I tried Claude, and despite working with it for an hour, the content never sounded like something written by a real living person. After about three hours total, I tore everything up and wrote it myself in about an hour.

The point of that story is to say that, at least for me, self-generated content is proving more and more rewarding no matter how good the models get. I really hope other people using LLMs reach that stage in the next 2-3 years. If that happens, AI responses in forums will only come from bad actors, or from well-intentioned people who don't know any better. Either way, that situation makes the content much easier to handle: you can ban the bad actors, educate those with good intentions, and have a moderation policy of "AI content is banned, unless you've clearly digested the content you are posting AND fully agree that it represents your opinion and understanding of the matter AND it expresses your opinion in an equally or more digestible manner than you would produce yourself," or something along those lines.


p.s. I really like this perspective:

I didn’t immediately agree with it, but after a bit of thought I can totally see this being the case.

6 Likes

I’m generally slightly apprehensive when I see AI responses, and much more so when the user posting it either:

  • Does not credit the AI
  • Does a direct copy-paste

But I think the thing that is really not okay is when a completely AI-generated post gets marked as the Solution. Really, the solution should go to the AI… or actually, simply not be given at all; just a like and a thank-you.


Adding on here: if the OP wanted an AI response, they could have simply asked ChatGPT. They don't need someone to answer a question they asked of fellow humans with something from AI.

3 Likes

I see so many great responses here. In this specific example, I see AI as useful because the question is purely tactical. So, as we ask our communities questions or for member responses, take into account what's tactical vs. "feeling." Depending on your community structure, there may be a specific space dedicated to "how to" questions and another dedicated to peer connections. The "how to" space can be a place where AI is welcome, possibly even recommended to avoid operator error. :slight_smile: Having that balance within your community space is great, and it's what keeps members coming back.

2 Likes

I think that Discourse already has the “how-to” section (ask.discourse.com). That’s actually the best AI tool right there in Discourse that users can use by themselves, so they don’t need somebody to do it for them. Everything else should be the “peer” space, where humans interact, ask more questions than provide answers, and happily disagree in order to squeeze some truth here and there.

As beautifully said here:

Most probably, people come to a community space when something unusual is happening. There is usually a lot of question-asking before a solution to a problem is found: what version do they use, did they try this or that. Sometimes it's great to feel some empathy from other users. AI does not ask questions; it has no intuition about how stupid we are sometimes (e.g. on Meta: Have you tried rebuilding Discourse? Have you tried safe mode?), and any empathy is definitely fake, no matter how pleasing.

So, my way of handling responses that are mostly AI content: I would probably ban them and implement an AI bot so users can easily help themselves.

1 Like