How to prevent community content from being used to train LLMs like ChatGPT?

OK, we are thinking the same thing.

I saw two replies that really scared me. I don't want to pay, but sooner or later paying could become mandatory for the version that actually works.

(I didn't give my credit card number, and I always use temporary everything, at least to stay a little off the track.)

But people are paying, and it jumped to 4x and 10x, then 100x, 24 dollars a day. I work directly in markets, and that's surreal.

I usually don't use this device to search the web (solving captchas for a couple of big businesses), because I feel more secure and private browsing on Linux. I suspect others may think in a similar way, and I respect it if that's not your case.

Open source is somewhat controlled too. That could sound a little neurotic, but I prefer human conversations in our community, and here we are discussing limits, and maybe methods to block something that nobody knows where it will stop (one such method is sketched just below).
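
For example, a minimal sketch, not something I have deployed here: a robots.txt at the forum root can ask known AI crawlers not to collect the content. GPTBot is OpenAI's documented crawler and CCBot is Common Crawl's; compliance is voluntary on their side, and this does nothing about content that was already scraped.

```text
# Hypothetical robots.txt rules at the site root, assuming the crawlers honor them.
# Block OpenAI's crawler:
User-agent: GPTBot
Disallow: /

# Block Common Crawl's crawler (a common source of LLM training data):
User-agent: CCBot
Disallow: /
```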

Hallucinations have been injected, and people are cloning themselves. That could break the information and concentrate a lot of control in one place.

Maybe this is a good moment to discuss limits, values, and privacy. Not to censor, complain, or avoid a good discussion.

If we are OK with this topic, I will share my points and go deeper into my research; my points are not solid, but they are real.

Could AI without OpenAI (which is not open) be possible, and a better tool for communities?

Please move this if you consider it OP, or merge it if you want.