I guess I can’t really speak on behalf of others, but in my experience I’ve mainly used Discourse forums (and Discord chat) for interacting with people around shared interests in products that are still growing, where I’m not sure it would be very useful to chat with an AI. For example:
Auxy app forum (music app; the forum is now closed but they’re on Discord), where people shared feedback on each other’s WIP tracks and gave tips specific to the app. No prior written material on the app existed besides what the community wrote.
Hopscotch forum (drag-and-drop coding app, where I work and which I helped to seed when it started). Again, there were no written materials or detailed guides about the app anywhere on the internet before the community wrote all those posts, so any AI would just be regurgitating what we wrote.
Sky: Children of the Light (game; the community is on Discord, but the premise is the same). Again, I played it during beta, before there was any written material at all. I helped to write the musical content on the wiki and have been with the community since before all that content was generated.
In all cases, the AI also hasn’t ‘used the apps’ itself.
So I don’t know that AI would have been helpful, except maybe to word more nicely all the stuff the community generated, because there was no written content at all on those topics before all those people produced it through discussions.
(I think Marx in Das Kapital, after observing factory workers in England, said something like: people don’t use machines; machines use people.)
Not that this AI isn’t cool or useful; it’s just that it probably wouldn’t have been very helpful if it had been present while I was in those communities.
I run a forum that offers support for entrepreneurs.
In November, I created @Robolivier, a friendly AI robot that helps users when they mention him. He also replies once to every new topic, in hopes of giving the user a satisfactory answer. I’ve added a disclaimer to each of his messages.
So far, the users have been finding it amusing and useful most of the time.
I plan to train him on my own interactions on the forum; we’ll see how that goes.
Since it’s a paid forum, bots are not an issue for me. But I can see how an open & free forum could get spammed into irrelevance.
A crucial difference is that when AI can mass-generate quality information under an arbitrarily large number of distinct “identities” (labels), it becomes cheap to violate, at critical or profitable moments, the trust that has been erroneously built up around those identities. As a human, I don’t want to violate the trust I’ve built with others, not only because of my conscience but because of the damage to my reputation. But an identity artificially tied to an AI is vastly more disposable and abusable, for both capitalistic greed and political machinations.
So how am I feeling? Afraid of the future. This tech is highly likely to wreck us socially in the near future (years), and eventually (decades) destroy us physically if we don’t figure out how to stop it.
On one of the forums I run we have someone who we have suspected for years was posting under at least two different names with different personas, one more confrontational than the other. Recently he slipped up and signed a post from one email address with his name from the other, and then signed another post with a third name. So now we know for sure!
There’s a very interesting book where the authors run famous writers through statistical analysis of word-usage patterns. It gets interesting when books are written by multiple authors; many of the later Tom Clancy books were largely written by the other writers.
This type of statistical analysis was how it was determined that J. K. Rowling was also writing under a different name, before she confirmed the fact. It was also used to analyze the Federalist Papers, concluding in the early 1960s that, as many had claimed, James Madison wrote the disputed essays, including several that Hamilton had claimed authorship of. (They were published anonymously at the time because it was thought that knowing who wrote them might diminish their political effectiveness.)
So maybe there’s some value in having ChatGPT-like interfaces in our forums, if we can use them to analyze the writing styles of our participants.
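The core of that kind of authorship analysis is simpler than it sounds: compare how often different texts use common function words. A minimal sketch, with an illustrative word list and made-up text snippets (the real studies used much longer texts and more sophisticated statistics):

```python
# Minimal stylometry sketch: compare texts by relative frequencies of
# common function words, the kind of signal used in the Federalist
# Papers study. The word list and texts here are illustrative only.
from collections import Counter

FUNCTION_WORDS = ["the", "of", "to", "and", "upon", "while", "whilst"]

def profile(text):
    """Relative frequency of each function word in `text`."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(a, b):
    """Mean absolute difference between two frequency profiles.
    Smaller means the texts use function words more similarly."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

known = profile("upon the whole of the matter upon which we agree")
disputed = profile("while the people of the union and the states decide")
print(distance(known, disputed))
```

In practice you would feed it whole post histories per user and a larger word list, but even this crude version shows why two accounts run by the same person can look statistically alike.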
Regarding people who answer by copy-pasting from ChatGPT, here’s a curious example of some backlash:
I’m also seeing this on the forum I frequent the most (which is quite technical in nature). A specific new user has been posting 8 or 9 answers a day, all from ChatGPT.
There’s a lot of valuable information there, but there are also inaccuracies and stuff that is downright misleading (imagine a complicated 7-step list where I look at step 6 and know it will never work… but many people won’t).
This user seems to be trying to build up a reputation… we’re considering asking people to always explicitly label ChatGPT content as such, a request we can only hope they’ll respect, since we can’t really enforce it.
This makes me uneasy; I don’t think this is going to take us to a good place. I fear that meaningless conversation will just take over the Internet. We might be forced to live real lives in the real world, and I’m not sure we can handle that…
My older son delights in sending me articles and posts showing ChatGPT inanities. I keep telling him that ChatGPT technology is perhaps at a 2nd- or 3rd-grade level right now in many areas, like math. To quote one of my professors years ago: it doesn’t know what it doesn’t know.
I haven’t asked it for a recipe to bake a cake yet; I shudder to think what it might get wrong. (I’ve seen too many human-posted recipes online that are totally unworkable.)
I think meaningless conversation may have taken over the Internet when Facebook and Twitter were invented. Those of us old enough to remember USENET, back when it was still accurate to call it “a video game for people who know how to read”, might draw the line even further in the past.
We’re definitely feeling this at the moment: running a free and open Discourse forum, we’re seeing a lot of ChatGPT-style spambots slipping past the existing spam protection.
Akismet isn’t keeping pace with them; I’ve never seen so many spam accounts get through before. They tend to post innocuous-looking comments that answer a question (or rephrase it and then answer it) in a fairly wordy way that appears helpful but misses some context or nuance from the original question.
I assume that Discourse forums are targeted as a testing ground, for similar reasons to WordPress sites a decade ago: it’s open-source tech in fairly widespread use, so bots that can navigate the signup flow on one forum can probably get access to many others.
We don’t currently see the traditional spambot behaviour (create an account, post something innocuous enough to slip through anti-spam measures, then edit it within 48 hours to add links), so I’m not sure what the endgame is here. It’s possible we’re just deleting those posts too quickly and they’d be edited into backlink farms later on.
I feel like mandatory 2FA should filter out most of the bots. Any thoughts?
Otherwise, I would probably get users to fill out a Typeform after account creation, add them to a “verified” group through a webhook on form submission, and make the forum read-only for everyone who isn’t verified.
That being said, now that bots can browse the web and understand button intents, I feel like that’s a band-aid approach.
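For what it’s worth, the webhook side of that workflow is pretty small. A rough sketch, assuming Discourse’s admin endpoint for adding users to a group and a form service that POSTs JSON containing the user’s forum username (the payload shape, group id, and API key are placeholders; verify the endpoint against your instance’s API docs):

```python
# Sketch of a form-submission webhook handler that promotes a user to a
# "verified" Discourse group. All concrete values below are assumptions.
import json
import urllib.request

DISCOURSE_URL = "https://forum.example.com"  # placeholder
API_KEY = "YOUR_API_KEY"                     # placeholder admin API key
GROUP_ID = 42                                # numeric id of the "verified" group

def extract_username(payload):
    """Pull the forum username out of the webhook payload.
    The 'form_response'/'username' keys are an assumed shape;
    check what your form service actually sends."""
    return payload["form_response"]["username"]

def build_request(username):
    """Build the PUT request that adds `username` to the group."""
    data = json.dumps({"usernames": username}).encode()
    return urllib.request.Request(
        f"{DISCOURSE_URL}/groups/{GROUP_ID}/members.json",
        data=data,
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "Api-Key": API_KEY,
            "Api-Username": "system",
        },
    )

def verify_user(payload):
    """Handle one webhook delivery: raises on a non-2xx reply."""
    return urllib.request.urlopen(build_request(extract_username(payload)))
```

You’d wire `verify_user` into whatever receives the webhook (a tiny Flask route, a serverless function, etc.). As the poster notes, though, this only gates account creation; it does nothing about bots that can fill out the form too.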
Everyone can rest easy: the Commerce Department has announced plans to make sure AI has appropriate ‘guardrails’.
From the WSJ:
“It is amazing to see what these tools can do even in their relative infancy,” said Alan Davidson, who leads the National Telecommunications and Information Administration, the Commerce Department agency that put out the request for comment. “We know that we need to put some guardrails in place to make sure that they are being used responsibly.”