To further enhance content accuracy, I propose a Community Notes feature similar to the one on X (formerly Twitter), allowing trusted users to add context to potentially misleading posts.
**Key Features:**

- **Eligibility:** Only users above a certain trust level (e.g., TL2+) can submit notes.
- **Voting Mechanism:** Notes gain visibility through upvotes from diverse users; misleading notes are downvoted or flagged.
- **Transparency & Oversight:** Notes remain public, with logged contributions to prevent abuse.

**Benefits:**

- **Improves Accuracy:** Provides factual clarifications rather than punitive measures.
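To make the lifecycle concrete, here is a minimal sketch of how eligibility, vote-based visibility, and the public log could fit together. Everything in it (the `CommunityNote` shape, names, thresholds) is an illustrative assumption, not an existing Discourse API.

```typescript
// Illustrative sketch only: none of these names exist in Discourse itself.

interface CommunityNote {
  id: number;
  postId: number;           // the post the note annotates
  authorTrustLevel: number;
  body: string;
  upvotes: number;
  downvotes: number;
  flagged: boolean;
}

const TRUST_LEVEL_REQUIRED = 2;  // the "TL2+" eligibility rule
const VISIBILITY_THRESHOLD = 5;  // assumed net votes needed before a note is shown

// Eligibility: only TL2+ users may submit notes.
function canSubmitNote(userTrustLevel: number): boolean {
  return userTrustLevel >= TRUST_LEVEL_REQUIRED;
}

// Voting mechanism: a note becomes visible once its net votes pass the threshold
// (a real system would also weigh voter diversity), and is hidden if flagged.
function isNoteVisible(note: CommunityNote): boolean {
  if (note.flagged) return false;
  return note.upvotes - note.downvotes >= VISIBILITY_THRESHOLD;
}

// Transparency & oversight: every contribution is appended to a public audit log.
const auditLog: { noteId: number; action: string; at: Date }[] = [];

function logContribution(noteId: number, action: string): void {
  auditLog.push({ noteId, action, at: new Date() });
}
```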
Isn't just replying to the misinformation enough? This feature seems redundant to me, since the report button and staff, together with trusted users, could already handle this by replying or applying an official notice.
If replies were enough, Staff Notes wouldn’t exist in the first place, right?
Whether to use replies or a dedicated feature is a UI/UX design choice. Simply replying means that misinformation and corrections appear with the same weight, making it harder for later readers to determine what is accurate.
Additionally, while staff-managed notes might work for small communities, in large communities, the workload for staff becomes overwhelming. This is why allowing trusted users to contribute to Community Notes is valuable.
It would take a lot of work to glue the pieces together, but I wonder if you could leverage Discourse Post Voting by having a special flag that auto-creates a topic to vote on the best community note. Then when a note/reply is approved or reaches a voting threshold, an Official Notice is added/updated to the original post.
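To sketch how that glue might look, here is a rough outline of the two steps against the Discourse REST API, written in TypeScript. The `POST /posts.json` call is the standard way to create a topic, but the `create_as_post_voting` parameter and the `PUT /posts/{id}/notice.json` endpoint are assumptions on my part and would need to be verified against Post Voting and core before building anything on them.

```typescript
// Rough sketch under stated assumptions; not a working integration.

const BASE_URL = "https://forum.example.com"; // hypothetical forum URL
const HEADERS = {
  "Api-Key": "YOUR_API_KEY",     // placeholder credentials
  "Api-Username": "system",
  "Content-Type": "application/json",
};

// Step 1: a special flag on a post auto-creates a topic where candidate notes are voted on.
async function createNoteVotingTopic(flaggedPostId: number): Promise<number> {
  const res = await fetch(`${BASE_URL}/posts.json`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({
      title: `Community note candidates for post ${flaggedPostId}`,
      raw: `Propose and vote on the best community note for post ${flaggedPostId}.`,
      create_as_post_voting: true, // assumed parameter; check the Post Voting plugin
    }),
  });
  const post = await res.json();
  return post.topic_id; // /posts.json returns the created post, including its topic id
}

// Step 2: once the winning note reaches the voting threshold, attach it to the
// original post as an Official Notice.
const VOTE_THRESHOLD = 10; // illustrative value

async function promoteWinningNote(
  flaggedPostId: number,
  noteText: string,
  votes: number
): Promise<void> {
  if (votes < VOTE_THRESHOLD) return;
  // Assumed endpoint mirroring the staff "add notice" action; verify before relying on it.
  await fetch(`${BASE_URL}/posts/${flaggedPostId}/notice.json`, {
    method: "PUT",
    headers: HEADERS,
    body: JSON.stringify({ notice: noteText }),
  });
}
```

In practice this logic would more likely live inside a plugin's flag and vote handlers than in an external script, but the two steps (auto-create a voting topic, then promote the winner into an Official Notice) would stay the same.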