The blog post talks about how GPT-4 can act as a moderator for internet content, applying written policies.
This applies directly to Discourse sites that use human moderators to filter content based on policies.
As a human moderator on a few sites, I use the policies from both the organization and the Discourse forum to decide on actions to take regarding posts, more specifically when they are flagged. It is not uncommon for me as a moderator to let a flagged post remain if it is not in violation of a policy.
I know you asked for my thoughts but I try to keep such responses to the facts; hopefully I stayed within my own bounds.
My goal was not to start a discussion, it was to pass along information that I consider relevant to both Discourse the company and those that run sites with the assistance of human moderators.
While I tried to stick to just the facts in the first post, here is a point that I feel might be worth discussing.
Should a site rely solely on an AI moderator, or should humans be in the loop? More importantly, what happens when an AI and a human disagree?
The use of the word loop was by choice, not coincidence; ever listen to recordings of the Apollo missions from flight control and notice the use of the word loop?
A very interesting side note related to this, which many may not see: this is the same problem that manned spaceflight grappled with; should the spacecraft be fully autonomous, or should a human be able to take over partial and/or full control? That work can be considered an early ancestor of the digital fly-by-wire autopilots later used in planes.
A fascinating book I read on the subject taught me a lot about an area of code and design not seen in any other projects:
“Digital Apollo: Human and Machine in Spaceflight” by David A. Mindell (site) (WorldCat)
Certainly keep humans in the loop. Have you seen Robocop? Seriously, there’s a lot of judgement in moderation. While it’s good to have guidelines, and to use them, running them as a set of strict rules seems foolish to me: a technical solution to a human problem.
Edit: I would add, I run human-scale forums, where the moderators know each other and know the regularly contributing members. I’m sure OpenAI are thinking of industrial-scale forums, run for profit, where the moderation challenges are quite different.
Discourse is an ideal platform for learning and growing and building these new tools.
I’m all in on embracing the AI revolution as a decentralized, interconnected “neural network” of networks, through which existing societies and cultures of humans can begin to evolve and collaborate more efficiently, and begin to turn the tide on the polarization enabled by the centralization of power and control wrought upon us by recent iterations of “social” networking.
Humans should be in the loop, but I think having some level of AI moderation would be a big selling point for any public institution that was considering having a forum. For example, if the CBC could somehow be convinced to have a forum.
The balance might be something like this: human moderators would use their human skills to guide conversations, while (human-supervised) AI would enforce the site’s guidelines. AI moderation could potentially make forums more willing to allow conversations that deal with polarizing issues.
I want AI-assisted moderation for catching problem users who make new accounts after being suspended. Dealing with them consumes more of my time than I’d like, and I’m not even one of the moderators (we have segregated roles).
Is there any AI functionality like this available for integration with Discourse? I think it would be really helpful in assisting with moderation of my community. Please let me know! I’m looking for solutions to save time handling posts that breach our guidelines.
Content moderation plays a crucial role in sustaining the health of digital platforms.
A content moderation system using GPT-4 results in much faster iteration on policy changes, reducing the cycle from months to hours. GPT-4 is also able to interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling. We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators. Anyone with OpenAI API access can implement this approach to create their own AI-assisted moderation system.
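For anyone curious what "implement this approach" looks like in practice, here is a minimal sketch, assuming the OpenAI Python SDK (v1+) and an API key in the `OPENAI_API_KEY` environment variable. The policy text, labels, and model name are illustrative placeholders, not what OpenAI actually uses:

```python
# Minimal sketch of policy-based moderation with a chat model.
# All policy wording and labels below are made-up examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """\
Label a post ALLOW if it follows the rules, or FLAG if it breaks any of:
1. No personal attacks or harassment.
2. No spam or undisclosed advertising.
Respond with exactly one word: ALLOW or FLAG.
"""

def moderate(post: str) -> str:
    """Ask the model to judge a post against the written policy."""
    response = client.chat.completions.create(
        model="gpt-4",   # any capable chat model should work
        temperature=0,   # deterministic labelling
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": f"Post:\n{post}\n\nLabel:"},
        ],
    )
    return response.choices[0].message.content.strip()

print(moderate("Buy my pills at example.com!!!"))  # expected: FLAG
```

The point the blog makes is that editing `POLICY` and re-running is an hours-long iteration loop, versus months of retraining human moderators on a revised policy document.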
While OpenAI sells you tokens to use the rides at your fairground, one “mental burden” is replaced by another “mental burden” (assuming you care). Can you spot it?
Whilst I’m absolutely loving this tech, and have made it my business to understand it as well as I’m able and learn how best to use it, it’s clear OpenAI (and services like it) are really onto something here from a business perspective.
The implication is that for every post made you have to pay OpenAI, potentially for multiple reasons (summarisation, post grammar, post NSFW checks, profile embeddings; the list goes on).
Before you know it, OpenAI is getting a decent cut of all the throughput on your own site. All that hard-won ad and affiliate revenue you have? Some of it is now diverted to them.
And Cloud VPS providers are going to love all the people upgrading their 4GB server to their 16GB offerings in order to use open source models locally.
That said, of course no-one is going to do this at scale unless it is more efficient to do it this way and it’s competitively important to do so.
And, less pessimistically still, another way of looking at these services is as a democratisation of technology that used to be available only to the big providers.
For example, semantic search is now cheaper than ever and small installations can now use powerful search at a very affordable price point. That’s surely got to be a great thing?
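To illustrate how little machinery that now takes, here’s a rough sketch of semantic search built on an embeddings endpoint; the model name, example posts, and query are all assumptions:

```python
# Rough sketch: semantic search over forum posts via embeddings.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts into unit-length vectors."""
    result = client.embeddings.create(
        model="text-embedding-ada-002", input=texts
    )
    vectors = np.array([item.embedding for item in result.data])
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

posts = [
    "How do I reset my password?",
    "The new theme looks great on mobile.",
    "Server upgrade scheduled for Friday night.",
]
corpus = embed(posts)

query = embed(["I forgot my login credentials"])[0]
scores = corpus @ query             # cosine similarity (vectors are normalised)
print(posts[int(scores.argmax())])  # -> "How do I reset my password?"
```

A handful of lines and a few fractions of a cent per thousand posts, where this used to mean running your own search infrastructure.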
The power of natural language processing is quite astonishing, even in the cheaper models.
And we’ve only just started this new era; I’m sure prices will come down as competition flourishes.
It’s entirely relevant and imho certainly not off-topic.
How many posts per day are there on the site you run?
If you have to pay OpenAI for every post contributed, that’s a consideration.
What is the marginal value of a post to your ad revenue and customer engagement?
It has to pay its way, ultimately, to be justifiable.
I’ve no doubt it might lead to a material reduction in community management hours and improve the overall quality of a site, etc.; that might establish a strong case for it.
It’s not just about the facility or the technical feasibility of it, though; these things need to be considered. A rough sketch of the arithmetic is below.
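For example, a toy back-of-envelope calculation; every number here is an assumption to be replaced with your own traffic figures and current pricing:

```python
# Back-of-envelope cost estimate for per-post AI processing.
# All of these numbers are assumptions, not real prices or traffic.
posts_per_day = 200
tokens_per_post = 1_500        # prompt + completion, across all calls per post
price_per_1k_tokens = 0.002    # assumed blended $ rate per 1K tokens

daily_cost = posts_per_day * tokens_per_post / 1_000 * price_per_1k_tokens
print(f"~${daily_cost:.2f}/day, ~${daily_cost * 30:.2f}/month")
# ~$0.60/day, ~$18.00/month at these assumed numbers
```

At those assumed numbers the bill is small, but the same arithmetic at industrial scale, or with several calls per post, looks very different.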