Trust level freeze

Feature name

Trust level freeze

Feature description

  1. Some users do not want their trust level to increase, because they know that higher trust levels bring more and more responsibilities. There are users like me who just want to remain regular users, not become leaders.
  2. It would also be useful to freeze the trust level of users who behave strangely or do not act in accordance with the community’s security policy. They would then always remain basic or regular users, which adds a layer of restriction and security, since users under a trust level freeze cannot raise their trust level.
  3. Often you want to freeze a user without necessarily having to close their account.

Reasons for implementing the feature

  1. Strengthen Discourse’s security model.
  2. Freeze users who, for some reason, don’t follow community policy, so that they cannot receive a higher trust level. In many systems, when a user violates a rule, the account is closed or frozen; this proposal is the account-freeze case. There could be a configurable freeze duration, maybe 1 day or 1 year.
  3. It’s a security measure.
  4. A user can request to be frozen, perhaps because, like me, they don’t want to be the leader of anything.
  5. Twitter uses the term “suspended account” for this, usually applied for spam, account security at risk, or abusive tweets or behavior. In my opinion this would not be a suspended account but a frozen one: the user can still use the account; it is not suspended.

references

  1. I searched Discourse and didn’t find much related to this idea.
  2. If there is a related idea, it would be best to merge the two.
  3. If anyone can read this and share feedback, I would appreciate it.
  4. Help on your suspended Twitter account

You can lock a user to a trust level from their admin/user page:

10 Likes

You can lock a user to a trust level from their admin/user page:

  • Didn’t know about this feature, thanks for the feedback.

These are my questions; I would appreciate clarification if you can answer them:

  1. Can I set a lock time, maybe for 1 day or 1 year or something?
  2. Another thing: can I, as a user, request that my trust level not be raised in the Discourse system?
  • My account freeze idea has to do with these questions

There is currently no timer option when locking a trust level; it’s turned on and off on a manual basis.

It’s possible for users to request for their trust level to be locked, if the admin of that forum is happy to do that for them. :+1:
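For admins who prefer scripting, the lock can presumably also be toggled through the admin API. A minimal sketch, assuming a `trust_level_lock` route with a `locked` parameter (both the path and the parameter name are assumptions here; verify against your instance’s API):

```python
# Hedged sketch: build a PUT request to lock a user's trust level.
# The /admin/users/:id/trust_level_lock.json route is an assumption.
import urllib.parse
import urllib.request

def build_lock_request(base_url, user_id, locked=True,
                       api_key="YOUR_API_KEY", api_username="system"):
    url = f"{base_url}/admin/users/{user_id}/trust_level_lock.json"
    data = urllib.parse.urlencode({"locked": str(locked).lower()}).encode()
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("Api-Key", api_key)          # standard Discourse API headers
    req.add_header("Api-Username", api_username)
    return req

req = build_lock_request("https://forum.example.com", 42)
print(req.full_url)  # https://forum.example.com/admin/users/42/trust_level_lock.json
```

Sending the request (`urllib.request.urlopen(req)`) would require a real instance and a valid admin API key.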

Just spotted your edit:

Discourse also has the option to Silence or Suspend a user’s account, if the admin feels it’s necessary. These options do have timers.

You can read more about Locking Trust Levels, Silencing, and Suspending in the:

2 Likes

@JammyDodger That was great; you clarified all my doubts. But could you also say whether these complementary ideas are good or not?

There is currently no timer option when locking a trust level; it’s turned on and off on a manual basis.

  • So what this post would add to Discourse is this: offer a timer option when locking a trust level, since this is currently done manually. This is interesting given two observations:
    1. Imagine configuring this process manually for 100 users; that would be exhausting. An automatic process with rules would be more efficient, for example setting a duration of maybe 1 day or 1 year.
    2. On Twitter, if I’m not mistaken, you can appeal a suspension: if you explain why your account should not be suspended, it returns to normal; if you don’t, it remains suspended for a period of time. I think the same could apply in Discourse: if you explain why your trust level should not be locked, it goes back to normal; if you don’t, it stays frozen for a period of time.

Scenario with automatic locked-trust-level measures (active while the timer that locks a trust level is running)

  1. The user must be informed of the reason their trust level was locked. Note: this occurs in the following cases: “Spam”, “Account security at risk”, “Abusive messages or behavior”, “Report - when users ask for their trust level to be locked”.

  2. The user can request that their trust level be locked, but must state the reason for the lock. For example:

    • “I request that my trust level be locked because I don’t want to be the leader of anything.” Note: this is a personal reason, i.e. the user wants it.
    • “I’m traveling and would like to lock my trust level; I don’t know whether my account could be hacked in this period.” Note: this is a personal reason, i.e. the user wants it.
    • “I think my account has been hacked; I would like my trust level to be frozen until the case is investigated by the Discourse community.” Note: this is an account-security reason.
  3. Users can request that another user’s trust level be locked. Some reasons:

    1. “The user sends spam constantly.” Note: refers to excessive posts from a specific user, or fake accounts reported by other users in the community.
    2. “The user does not follow community policy; I would like the community to evaluate the case. I understand that by requesting that a user’s trust level be locked, I too can somehow be held responsible for this.” Note: refers to a specific user’s perceived abusive messages or behavior, reported by other users.
  4. Only the admin can approve locking a user’s trust level. Notes:

    • This occurs when the system classifies the case as: “Spam”, “Account security at risk”, “Abusive messages or behavior”, “Report - when users request that their trust level be locked”.
    • It does not occur when the user requests the lock for personal reasons.

Important notes

  1. If the same issues recur with the user, the trust level is terminated in the following cases: “Spam”, “Account security at risk”, “Abusive messages or behavior”, “Report - when users request that their trust level be locked”.
  2. Number of sanctions: 4. This number is based on Youtube’s process: “Warning”, “strike 1”, “strike 2”, “strike 3”. Note: if a channel does not comply with Youtube policy, it receives these notifications and has a period in which to contest the termination, suspension, or freezing of the account. We could apply the same measure to trust levels in Discourse: for the cases I mentioned above, a four-step process of “Warning”, “strike 1”, “strike 2”, “strike 3” could make sense. Depending on the notices, the trust level can be suspended, frozen, or terminated; if it is frozen or suspended, there is a period of time after which it can return to normal.
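The four-step ladder in note 2 can be sketched as a simple lookup (the action names and the “terminated” fallback follow the note; the exact behavior is this proposal’s assumption, not an existing Discourse feature):

```python
# Escalation ladder: warning, then three strikes, then termination.
ACTIONS = ["warning", "strike 1", "strike 2", "strike 3"]

def next_action(prior_offenses):
    """Return the moderation step for a user's next confirmed offense."""
    if prior_offenses < len(ACTIONS):
        return ACTIONS[prior_offenses]
    return "trust level terminated"  # beyond strike 3

print([next_action(n) for n in range(5)])
# ['warning', 'strike 1', 'strike 2', 'strike 3', 'trust level terminated']
```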

It’s possible for users to request for their trust level to be locked, if the admin of that forum is happy to do that for them.

  • As you mentioned in your feedback (“users can request for their trust level to be locked, if the admin of that forum is happy to do that for them”), the novelty of this post’s idea would be a built-in way for users to request that their trust level be locked.

Discourse also has the option to Silence or Suspend a user’s account, if the admin feels it’s necessary. These options do have timers.

  • This is really interesting and I hadn’t seen it; thanks for mentioning it. I’ll do more research and learn more about Discourse.
references

Leader is not a trust level you reach just by using Discourse.

So locking the trust level is not necessary to avoid becoming a leader.

3 Likes

There is an Automation plugin that may give an option to set a time limit on a locked TL, but I’m not sure.

2 Likes

is there any way to lock trust levels for everyone?

1 Like

is there any way to lock trust levels for everyone?

  • Based on the feedback here, no. But I hope this helps; my idea is about exactly this. In short, there would be these 2 processes:
  1. Users report other users who do not follow community policy. The reported user receives a warning and has to explain whether the report is true. If the user can argue that their trust level should not be locked, it returns to normal. If they cannot, the trust level is frozen or suspended for a time that the administrator has set for that specific user or set of reported users.
  2. The user requests, for some reason, that their trust level be frozen or suspended.

Notes:

  1. In both cases, the system can freeze or suspend the trust level automatically. There would be no manual step here, because only the admin can later confirm whether the freeze or suspension is fair, based on the response of the user who asks for the decision to be reviewed.
  2. If the user presents no reason, it is understood that the trust level should be frozen or suspended for a while.
  3. If the user asks for the situation to be analyzed, only the administrator evaluates it, at the end.
  4. If the admin denies a request, the admin should tell the user the reason it was denied.
    • The user can then submit a new request for review; if the admin accepts it, the trust level suspension or freeze is cancelled. Otherwise, if the user has no arguments, the suspension or freeze stands.
  5. I believe similar processes exist in services like WhatsApp and Twitter.
  6. The link I attached here explains how WhatsApp detects spam; maybe Discourse can draw some insight from it.

references

Maybe using groups and this plugin could help accomplish at least part of it.

2 Likes

@Heliosurge I found this idea very interesting. From what I’ve read about the plugin, some of this is already done. In my case, to solve my core problem, I would need something like:

Locking Trust Levels (automation plugin): If a user posts too much, includes too many and/or inappropriate images, abuses the flag system, or similar, an alternative to the above is locking the user to trust level 0. This will limit the number (and frequency) of topics and posts the user can create, prevent them from including too many images/links, and prevent the user from casting flags. Trust levels can be configured from the user’s Admin page.

Silence the user (automation plugin): Silenced users are prevented from creating new topics, posts, flags, or PMs on the site. They are still able to complete other actions, like “liking” posts, reading topics, replying to PMs, etc. Additionally, they can communicate with moderators via PM, so you can continue to communicate with them to try and address the behavior.

Suspend the user (automation plugin): Suspended users are prevented from logging in, and thus from completing any actions on the forums. A suspension is the strongest possible recourse you have for a user and should be used sparingly. Like silencing, suspending a user is done from the user’s Admin page. Like silencing, suspensions are for a specific period of time. You may want to suspend the user for a short period of time first, and if the user returns and continues the behavior, increase the suspension time.

Trust level freeze (trigger-automation plugin): When a user is silenced or suspended, the Locking Trust Levels trigger fires.

how the trust level freeze trigger works

  1. Trust level freeze - triggers these events automatically:
    • Silenced the user - Warning
    • Suspended the user - strike 1
    • Locking trust level - strike 2
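A minimal sketch of that trigger mapping, with each automation event tied to its stage (names follow the list above; this is illustrative, not an existing plugin API):

```python
# Map each automation event to its stage in the strike ladder.
EVENT_STAGE = {
    "silenced the user": "warning",
    "suspended the user": "strike 1",
    "locking trust level": "strike 2",
}

def stage_for(event):
    """Return the strike-ladder stage for an automation event."""
    return EVENT_STAGE.get(event.lower(), "unknown event")

print(stage_for("Suspended the user"))  # strike 1
```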

final solution

  1. In short, there would be 4 plugins. Only the last plugin calls the others, as described in the topic above: how the trust level freeze trigger works.
  2. Besides users silenced or suspended for the cases I mentioned above (Spam, Account security at risk, Abusive messages or behavior, Report - when users request that their trust level be locked), there is still the case of users silenced or suspended for personal reasons, that is, self-report:
    • “I request that my trust level be locked because I don’t want to be the leader of anything.”
    • “I’m traveling and would like to lock my trust level; I don’t know whether my account could be hacked in this period.”
    • “I think my account has been hacked; I would like my trust level to be frozen until the case is investigated by the Discourse community.”
    • Note: users who self-report can request account suspension, account silencing, or, as in the case I mentioned, a locked trust level.
      • If the user presents proof that they should not be suspended, silenced, or have their trust level locked, the account returns to normal.

Notes

  1. I read it a few times, and thanks for the feedback, JammyDodger ;D I read this document and think it’s pretty cool: https://meta.discourse.org/t/discourse-moderation-guide
  2. Dan DeMontmorency - What do you think of this idea? Is it a good one? Is it possible to create something like this?

new ideas

  1. A viable alternative may be to build this with suspend-a-user-via-the-api, silence-user-via-api, add-a-user-to-a-group-via-api, discourse-docs-api-org, auto-suspend-inactive-user, discourse_api_pull_121 - the only problem is that I still haven’t seen any API information about locking the trust level.
  2. We could have plugins that communicate with the API for this: silence-user-via-api, auto-suspend-inactive-user, “locking trust level”, suspend-a-user-via-the-api.
  3. My idea would be a trust security plugin that communicates with the following APIs: silence-user-via-api, auto-suspend-inactive-user, “locking trust level”, suspend-a-user-via-the-api.
  4. My initial suggestion is that it would be really cool to have these endpoints:
    • ${this.url}/admin/users/${userId}/groups/report/spam
    • ${this.url}/admin/users/${userId}/groups/report/lockingtrustlevel
    • ${this.url}/admin/users/${userId}/groups/report/accountsecurityatrisk
    • ${this.url}/admin/users/${userId}/groups/report/abusivemessagesorbehavior
    • ${this.url}/admin/users/${userId}/groups/userwishthis/lockingtrustlevel
    • ${this.url}/admin/users/${userId}/groups/userwishthis/accountsecurityatrisk
    • ${this.url}/admin/users/:user_id/report/spam
    • ${this.url}/admin/users/:user_id/report/lockingtrustlevel
    • ${this.url}/admin/users/:user_id/report/accountsecurityatrisk
    • ${this.url}/admin/users/:user_id/report/abusivemessagesorbehavior
    • ${this.url}/admin/users/:user_id/userwishthis/accountsecurityatrisk
    • ${this.url}/admin/users/:user_id/userwishthis/lockingtrustlevel

but something that would already solve part of this would be this PoC (proof of concept):

  • ${this.url}/admin/users/:user_id/silence
  • ${this.url}/admin/users/:user_id/lockingtrustlevel
  • ${this.url}/admin/users/${userId}/groups/silence
  • ${this.url}/admin/users/${userId}/groups/lockingtrustlevel
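The proposed routes can be assembled with a single helper; note that these endpoints do not exist in Discourse today - they are this post’s suggestion:

```python
# Build one of the proposed (hypothetical) report endpoints.
def report_endpoint(base_url, user_id, source, reason):
    """source: 'report' (reported by others) or 'userwishthis' (self-report)."""
    return f"{base_url}/admin/users/{user_id}/{source}/{reason}"

print(report_endpoint("https://forum.example.com", 42, "report", "spam"))
# https://forum.example.com/admin/users/42/report/spam
```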
2 Likes

I think that could be achieved by tweaking the Trust Level requirements site-wide: users would be set to one level at the start, with unachievable thresholds so they could never progress. You can find a whole raft of settings for this in the Trust Levels section of your admin settings. Trust Levels are really useful, though, so you may want to consider what you’d lose out on first.

3 Likes

Automating parts of moderation can be quite handy, and you have a good layout for what you want to achieve. For a small mod team of a large community this can be very useful, as long as the team regularly investigates/audits the system.

2 Likes

My use of Discourse is quite different than others’: it’s less of a community and more of an authentication provider, along with being a support site.

1 Like

another idea

image 1

image 2

case 1:
  1. There is a report button for any user in the community; this makes the system more autonomous.
  2. When you click to report a user, a modal appears where you can choose the reason for the report.
  3. When this happens, the reported user receives a message.
  4. And the report is seen by the moderator or admin.
  5. If the user does not respond to this report message, trying in turn to argue that “it is false” or “it is unfounded”, then, as happens on Youtube, the process is closed.
  6. The event is applied: Locking Trust Levels.
  7. If this happens again, the event is applied: Silenced user.
  8. If this happens again, the event is applied: Suspend user.
notes
  1. The administrator or moderator verifies the complaint from the user who reports another user.
  2. If the complaint makes sense, the moderator or administrator forwards the message to the reported user.
  3. If that user has a reason showing the complaint is false, the process is closed.
  4. The admin or moderator can check the reported user’s response and decide whether or not to archive the case.
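Case 1’s flow can be sketched as a small state machine (state and event names are assumptions drawn from the steps and notes above):

```python
# Report lifecycle as simple state transitions; unknown events leave the
# state unchanged.
TRANSITIONS = {
    ("reported", "user_responds_validly"): "archived",
    ("reported", "no_response"): "locking_trust_levels",
    ("locking_trust_levels", "repeat_offense"): "silenced",
    ("silenced", "repeat_offense"): "suspended",
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)

print(step("reported", "no_response"))  # locking_trust_levels
```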
case 2

image

Note: the user thinks their account has been hacked, so they report their own account.

final notes

All the plugins described above are for automating the process I now describe with images.

How does the reporting process occur automatically?

  1. From community users to other community users, i.e. when one or more users report one or more users - this happens when they click the “report this user to the community” button on one or more users.
  2. When users of the type moderator, administrator, or community leader are called by one or more users to report a post - this is known as flagging a post.
  3. By the user himself, when he feels he has been hacked, i.e. when the user reports himself, notifying the system that the account has been hacked - this happens when the user clicks the “report this user to the community” button on their own account, i.e. a self-report.
  4. Administrators, moderators, or community-leader users are called in these cases:
    • flagged post
    • user reported by community users
    • user reported their own account, believing it was hacked
  5. In all these scenarios, the plugins I’ve described are required to automate the process.
  6. The automatic options for moderators, admins, or community leaders to manage users are:
    • Locking Trust Levels
    • Silenced user
    • Suspend user

Note: this can only be done if there is a list of reported users.

Notes

  1. Generate a list of users that are reported.
  2. With this list, we can do the following for users who did not ask for a reply - this is done first:
    • Locking Trust Levels
    • Silenced user
    • Suspend user
  3. For users who requested a reply:
    • Administrators, moderators, and community leaders are called.
    • If the reported user’s response is accepted as valid, everything is archived.
    • If the answer is not accepted, the user has one last chance to respond; if they don’t, the process is closed.
  4. We could view this list of the most common and uncommon report cases within Discourse, including the year, month, and week in which they occur (year to year, month to month, week to week) - that would be my initial idea.
  5. In my opinion, the easiest way to do this, without harming Discourse’s data model, would be a temporary database to receive reports from users; a database like MongoDB could be used for this.
    1. I am thinking of using MongoDB to receive information about users that are reported.
    2. MongoDB works well here: it is a document-oriented database, which is interesting when you have a high volume of unstructured data - the famous NoSQL.
    3. In addition, the database is not permanent but temporary; MongoDB supports expiring (temporary) data.
    4. I thought of this idea when I read: Running Discourse with a separate PostgreSQL server - I would have a separate MongoDB database for this.
    5. According to this page: GitHub - discourse/discourse: A platform for community discussion. Free, open, simple., Discourse uses Redis and PostgreSQL. Have you ever thought about using MongoDB for the case I described?
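The “temporary database” note can be made concrete with a MongoDB TTL index, which deletes documents a fixed time after their timestamp (with pymongo, roughly `reports.create_index("createdAt", expireAfterSeconds=...)`). Below is just the retention rule, sketched so it runs without a database; the 90-day window is an assumption:

```python
# Mirror of what a TTL index would enforce server-side.
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # assumed retention window for reports

def is_expired(created_at, now):
    """True once a report document is older than the retention window."""
    return now - created_at > RETENTION

now = datetime(2024, 1, 1)
print(is_expired(now - timedelta(days=91), now))  # True
```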

idea summary

“reported users - mongodb”

reports: {

 report1: {
  user: "user001",
  linkFlagPost: "https://meta.discourse.org/t/post-test/1122344",
  reason: "flag post",
  reportedUser: "user002"
 },

 report2: {
  user: "user001",
  reportedUser: "user003",
  reason: "spam"
 },

 report3: {
  user: "user001",
  reportedUser: "user003",
  reason: "abusive messages or behavior"
 },

 report4: {
  user: "user001",
  reportedUser: "user003",
  reason: "user discloses illegal user data"
 },

 report5: {
  user: "user001",
  reportedUser: "user002",
  reason: "user posts dubious links, links that contain viruses, malware"
 },

 report6: {
  user: "user001",
  reportedUser: "user002",
  reason: "specify another reason"
 },

 report7: {
  user: "user004",
  reportedUser: "user005",
  reason: "I think my account has been hacked, I would like my trust level to be frozen until the case is investigated by the Discourse community"
 }
}

notes

use sha256 to anonymize the identity of whoever sends the report and whoever is reported

reports: {

 report1: {
  user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
  linkFlagPost: "https://meta.discourse.org/t/post-test/1122344",
  reason: "flag post",
  reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
  status1: "filed process",
  status2: "Locking Trust Levels"
 },

 report2: {
  user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
  reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
  reason: "spam",
  status1: "filed process",
  status2: "Locking Trust Levels"
 },

 report3: {
  user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
  reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
  reason: "abusive messages or behavior",
  status1: "filed process",
  status2: "Locking Trust Levels"
 },

 report4: {
  user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
  reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
  reason: "user discloses illegal user data",
  status1: "filed process",
  status2: "Locking Trust Levels"
 },

 report5: {
  user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
  reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
  reason: "user posts dubious links, links that contain viruses, malware",
  status1: "filed process",
  status2: "Locking Trust Levels"
 },

 report6: {
  user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
  reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
  reason: "specify another reason",
  status1: "filed process",
  status2: "Locking Trust Levels"
 },

 report7: {
  user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
  reportedUser: "a1dd6837f284625bdb1cb68f1dbc85c5dc4d8b05bae24c94ed5f55c477326ea2",
  reason: "I think my account has been hacked, I would like my trust level to be frozen until the case is investigated by the Discourse community",
  status1: "filed process",
  status2: "Locking Trust Levels"
 },

 report8: {
  user: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
  reportedUser: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
  reason: "I think my account has been hacked, I would like my trust level to be frozen until the case is investigated by the Discourse community",
  reply: {
   reportedUser: "c23162ffc1a535af2ee09588469194816e60cb437e30d78c5617b5d3f1304d6a",
   reason: "View logs in account - if you confirm this process, we will lock the trust level",
   replyFrom: "administrator",
   status1: "filed process",
   status2: "Locking Trust Levels"
  }
 }
}
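The sha256 anonymization step above can be sketched with Python’s hashlib (note: a bare hash of a username can still be reversed by guessing candidate names; a salted or keyed hash such as HMAC would anonymize better):

```python
import hashlib

def anonymize(username: str) -> str:
    # One-way digest stored in place of the raw username in report documents.
    return hashlib.sha256(username.encode("utf-8")).hexdigest()

print(anonymize("user001"))  # a 64-character hex digest
```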

Some problems with this idea, and possible solutions
  1. It may be the wrong solution; I need feedback from the Discourse community to know whether the idea is valid.
  2. Implementing this could be complicated.
2 Likes