Trust level freeze

Another idea

[image 1]

[image 2]

case 1:
  1. There is a report button available to every user in the community, which makes the system more autonomous.
  2. When you click "report user", a modal appears where you can choose the reason for the report.
  3. When this happens, the reported user receives a message.
  4. The report is also visible to the moderator or admin.
  5. If the reported user does not respond to this report message, for example by arguing that "it is false" or "it does not apply" (as happens on YouTube), the process is closed.
  6. The event is then applied: Locking Trust Levels.
  7. If this happens again, the event applied is: Silenced user.
  8. If this happens again, the event applied is: Suspend user (the escalation order is sketched after the notes below).
notes
  1. The administrator or moderator reviews the report filed by the user who reported another user.
  2. If the reporting user's complaint makes sense, the moderator or administrator forwards the report message to the reported user.
  3. If the reported user gives a reason showing that the complaint is false, the process is closed.
  4. The admin or moderator can review the reported user's response and decide whether or not to archive the case.
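
To make the escalation in case 1 concrete, here is a minimal TypeScript sketch of the order in which the moderation events could be applied. The type and function names are hypothetical, not existing Discourse code; it only illustrates the lock → silence → suspend progression described above.

// Hypothetical escalation ladder for case 1: each confirmed report
// (no valid response from the reported user) moves the user one step up.
type ModerationEvent = "Locking Trust Levels" | "Silenced user" | "Suspend user";

const ESCALATION_ORDER: ModerationEvent[] = [
  "Locking Trust Levels",
  "Silenced user",
  "Suspend user",
];

// Returns the next event to apply, given how many confirmed reports
// the user already has (0 -> lock, 1 -> silence, 2 or more -> suspend).
function nextModerationEvent(confirmedReports: number): ModerationEvent {
  const index = Math.min(confirmedReports, ESCALATION_ORDER.length - 1);
  return ESCALATION_ORDER[index];
}

console.log(nextModerationEvent(1)); // "Silenced user"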
case 2:

[image]

Note: The user thinks their account has been hacked, so they report their own account.

final notes

All the plugins described above are meant to automate the process that I now describe with images.

How does the reporting process happen automatically?

  1. From community users against other community users, i.e. when one or more users report one or more other users - this happens when a user clicks the "report this user to the community" button on another user's profile.
  2. When moderators, administrators, or community leaders are called on by one or more users to review a post - this is known as flagging a post.
  3. By the users themselves, when they suspect they have been hacked; that is, a user reports their own account, notifying the system that it has been compromised - this happens when the user clicks the "report this user to the community" button on their own account (a self-report).
  4. Administrators, moderators, or community leaders are called in for these cases:
    • a flagged post
    • a user reported by community users
    • a user who reported their own account because they think it was hacked
  5. In all these scenarios, the plugins I have described are needed to automate the process.
  6. The automatic options for moderators, admins, or community leaders to manage users are these (see the sketch after the note below):
    • Locking Trust Levels
    • Silenced user
    • Suspend user

Note: This can only be done if there is a list of reported users.
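
As a rough illustration of the three report sources and the three automatic actions, here is a hypothetical TypeScript data model for one entry in that list of reported users (the type names are my own, not existing Discourse types):

// Hypothetical model of the three report sources described above.
type ReportCase =
  | { kind: "flag post"; linkFlagPost: string }                  // a post flagged to staff
  | { kind: "reported by community"; reason: string }            // a user reported by other users
  | { kind: "self report"; reason: "account possibly hacked" };  // a user reports their own account

// The three automatic options staff can apply to a reported user.
type ModerationAction = "Locking Trust Levels" | "Silenced user" | "Suspend user";

// One entry in the list of reported users that the plugins would build.
interface ReportedUserEntry {
  user: string;              // who filed the report
  reportedUser: string;      // who is being reported
  reportCase: ReportCase;
  action?: ModerationAction; // filled in once staff decides
}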

Notes

  1. Generate a list of reported users.
  2. With this list, we can do the following. For reported users who did not request a reply (this is handled first), apply one of:
    • Locking Trust Levels
    • Silenced user
    • Suspend user
  3. For reported users who requested a reply:
    • Administrators, moderators, and community leaders are called in.
    • If the reported user's response is accepted as valid, everything is archived.
    • If the answer is not accepted, the user has one last chance to respond, and if they don't, the process is closed.
  4. We could then see a list of the most common and least common report cases within Discourse, including the year, month, and week in which they occur (year over year, month over month, week over week) - that would be my initial idea.
  5. In my opinion, the best or easiest way to do this - without affecting the Discourse data model, and in a practical way - would be to have a temporary database to receive reports from users; a database like MongoDB could be used for this.
    1. I am thinking of using MongoDB to receive the information about reported users.
    2. MongoDB works well here - it is a document-oriented database, which is interesting when there is a high volume of unstructured data (the well-known NoSQL case).
    3. In addition, the database is not permanent, it is temporary; MongoDB supports documents that expire automatically, as sketched below.
    4. I thought of this idea when I read this: Running Discourse with a separate PostgreSQL server - I thought I could have a separate MongoDB database for this.
    5. According to this page: GitHub - discourse/discourse: A platform for community discussion. Free, open, simple., Discourse uses Redis and PostgreSQL. Have you ever thought about using MongoDB for the case I described?
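
As a rough sketch of this temporary-database idea, here is how a reports collection with automatically expiring documents could look, using the official MongoDB Node.js driver and a TTL index. The connection string, database name, collection name, and 30-day lifetime are all placeholders chosen for illustration:

import { MongoClient } from "mongodb";

// Placeholder connection string; adjust for a real deployment.
const client = new MongoClient("mongodb://localhost:27017");

async function storeReport() {
  await client.connect();
  const reports = client.db("discourse_reports").collection("reports");

  // TTL index: MongoDB removes each document about 30 days after its
  // createdAt value, so the report store stays temporary as described above.
  await reports.createIndex({ createdAt: 1 }, { expireAfterSeconds: 60 * 60 * 24 * 30 });

  await reports.insertOne({
    user: "user001",
    reportedUser: "user002",
    reason: "spam",
    createdAt: new Date(), // field used by the TTL index
  });

  await client.close();
}

storeReport().catch(console.error);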

idea summary

“reported users - mongodb”

reports: {

report1:{
 user: "user001",
 linkFlagPost: "https://meta.discourse.org/t/post-test/1122344",
 reason: "flag post",
 reportedUser: "user002"
},

report2:{
 user: "user001",
 reportedUser: "user003",
 reason: "spam"
},

report3:{
 user: "user001",
 reportedUser: "user003",
 reason: "abusive messages or behavior"
},

report4:{
 user: "user001",
 reportedUser: "user003",
 reason: "user discloses illegal user data"
},

report5:{
 user: "user001",
 reportedUser: "user002",
 reason: "user posts dubious links, links that contain viruses, malware"
},

report6:{
 user: "user001",
 reportedUser: "user002",
 reason: "specify another reason"
},

report7:{
 user: "user004",
 reportedUser: "user005",
 reason: "I think my account has been hacked, I would like my trust level to be frozen until the case is investigated by the Discourse community"
}
}

notes

Use SHA-256 to anonymize the identity of whoever sends the report and whoever receives it, as in the sketch below.
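
A minimal sketch of that anonymization step, using Node's built-in crypto module (the function name is my own):

import { createHash } from "node:crypto";

// Hash a username with SHA-256 so the stored report does not contain
// the raw identity of the reporter or of the reported user.
function anonymize(username: string): string {
  return createHash("sha256").update(username).digest("hex");
}

// Example usage: the documents below store hex digests like these
// instead of raw usernames.
console.log(anonymize("user001"));
console.log(anonymize("user002"));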

reports: {

report1:{
 user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
 linkFlagPost: "https://meta.discourse.org/t/post-test/1122344",
 reason: "flag post",
 reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
 status1: "filed process",
 status2: "Locking Trust Levels"
},

report2:{
 user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
 reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
 reason: "spam",
 status1: "filed process",
 status2: "Locking Trust Levels"
},

report3:{
 user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
 reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
 reason: "abusive messages or behavior",
 status1: "filed process",
 status2: "Locking Trust Levels"
},

report4:{
 user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
 reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
 reason: "user discloses illegal user data",
 status1: "filed process",
 status2: "Locking Trust Levels"
},

report5:{
 user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
 reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
 reason: "user posts dubious links, links that contain viruses, malware",
 status1: "filed process",
 status2: "Locking Trust Levels"
},

report6:{
 user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
 reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
 reason: "specify another reason",
 status1: "filed process",
 status2: "Locking Trust Levels"
},

report7:{
 user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
 reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
 reason: "I think my account has been hacked, I would like my trust level to be frozen until the case is investigated by the Discourse community",
 status1: "filed process",
 status2: "Locking Trust Levels"
},

report8:{
 user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
 reportedUser: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
 reason: "I think my account has been hacked, I would like my trust level to be frozen until the case is investigated by the Discourse community", 
 reply: {
    reportedUser: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
    reason: "View logs in account - if you confirm this process, we will lock the trust level",
    replyFrom: "administrator",
    status1: "filed process",
    status2: "Locking Trust Levels"
 }
}
}

Some problems with this idea and its solution:
  1. It may be the wrong solution; I need feedback from the Discourse community to know whether the idea is valid or not.
  2. Implementing this could be complicated.