Defining trust levels


(finid) #1

Continuing the discussion from Group private messaging:

[quote=“codinghorror, post:8, topic:3016, full:true”]I believe inviting others to PMs is a mod-only activity at the moment, however, we do want to unlock that as part of the trust level progression.

(Remember right now we have only two trust levels implemented: 0, new user, and 1, basic user.)[/quote]

So we have “visitor” and “basic user” trust levels. I can understand “visitor.” At what point does that user become a “basic user”?

And how do we begin to define higher trust levels? Is it going to be based on number of posts, replies to posts, a combo of both, or some other factor or factors?


(Adam Davis) #2

The first time they use a goto statement in their code. :laughing:


(Chris Sims) #3

Here is some basic information about the transition from new to basic:

As a site admin, this is also configurable in the Site Settings panel.


(F. Randall Farmer) #4

This design is underway, so it is an excellent time to make any suggestions you may have.

Presently, the thinking is that the visitor trust level provides a mechanism to limit access to some of the more subtle features of the software, and to protect users from damaging the discussions, either intentionally or accidentally.


(F. Randall Farmer) #5

These are placeholder values, but at least you get the idea…

So - besides “visitor”, “basic user”, and “moderator” what roles/levels do you think might be appropriate?

My next post is somewhat dated, but might be fodder for discussion…


(finid) #6

I think “visitor” and “basic user” define a very wide range, which is good. At the mod level, we could define two or more (two is best) levels of mods.

For example, a Junior Mod could have the privilege of doing X and Y, but not Z, while a Senior Mod could do X, Y, and Z, stopping short of what the full admin can do.

Just ideas, my thinking could change as this discussion progresses.


(Adam Davis) #7

I like the idea that the more trusted a user is, the more functionality they are given.

  • Able to move posts from topic to topic, or create new topics to put off-topic ones into.
  • Able to edit thread titles and categories
  • Able to close topics
  • Interact with plugins in a “gamemaster” role
  • Access to basic site stats so they can work on community building

Basic groundwork to keep a forum moving smoothly. There’d have to be an audit trail, and a way for admins and moderators to undo some actions, and push users down a level or two and lock them from going up again if they aren’t acting appropriately.


(F. Randall Farmer) #8

[Fodder for discussion]

I wrote Habitat Anecdotes in 1988(!):

##The People##

The entire point of Habitat is The People. It is an interactive environment where people define the parameters of their experience. Chip likes to call it “A Social Crucible”: throw some people in a room with some fun toys, and see what happens. If a situation arises that requires modification, first let them try to sort it out --avoid changing the rules-- and if they can’t, take their input on how to change things. From this, it is clear that to understand Habitat, we must first understand its users.

There are basically 5 types of people in the Habitat universe:

  1. The Passive

Easily 50% of the number of users fall into this category, but they probably use only 20% of the connect time (rough estimates). They tend to “cross over” to Habitat only to read their mail, collect their 100t bonus, and read the weekly newspaper. They tend to show up for events ad-hoc and when the mood strikes. This is the most important area for development. Special events and activities need to target this “on for just a few minutes” group. This group must be led by the hand to participate. They tend to want to “be entertained” with no effort, like watching TV. The trick here is to encourage active participation.

  2. The Active

This group is the next largest, and made up the bulk of the paying user-hours. The active user participates in 2-5 hours of activities a week. They tend to log into Habitat right after connecting. They send out ESP messages to others on-line to find out what is going on. They ALWAYS have a copy of the latest paper (and gripe if it comes out late). This group’s biggest problem is overspending. They really like Habitat, and lose track of the time spent “out there”. The watch word here is “be thrifty”. (See Quests for more on this)

  3. The Motivators

The real heroes of Habitat. The Motivators understand that Habitat is what they make of it. They set out to change it. They throw parties, start institutions, open businesses, run for office, start moral debates, become outlaws, and win contests. Motivators are worth their weight in gold. One motivator for every 50 Passive/Active users is wonderful. Nurture these people. (See Motivators & Caretakers at Work)

  4. The Caretakers

Usually already employees. The Caretakers are “mature” Motivators. They tend to help the new users, control personal conflicts, record bugs, suggest improvements, run their own contests, officiate at functions, and in general keep things running smoothly. There are far fewer of these than Motivators. Out of a Pilot group of about 400, we had 3. What you want to do with a Caretaker is groom him for Geek Godhood. (See Motivators & Caretakers at work)

  5. The Geek Gods (System Operators)

I was the first Oracle/Operator. (I talk about that experience in Geek Gods Revisited). The operator’s job is most important. It really is like being a Greek God from the ancient writings. The Oracle grants wishes and introduces new items/rules into the world. With one bold stroke of the keyboard, the operator can create/eliminate bank accounts, entire city blocks, or the family business. This is a difficult task, as one must consider the repercussions of any “external” effects to the world. Think about this: would you be mad at “God” if one day suddenly electricity didn’t work anymore? Habitat IS a world. As such, it should be run by someone with experience in that area. I suggest at least 10 years of experience in Fantasy Role Playing and 2 years on telecommunications networks (specifically ones with CHAT programs). A Geek God must understand both consistency in fictional worlds and the people who inhabit them.

To optimize the Habitat funativity experience, the goal is to move the user from his/her present category to the next one up:

Passive → Active → Motivator → Caretaker → Geek God

Move everyone one role to the right, and you will have a successful, self maintaining system. (Read: you will make bags of money.)

In 2000, Amy Jo Kim would generalize these five categories in her book Community Building on the Web, defining five steps of advancement in online communities.

But these observations are more than a dozen years old. Do they still hold? What else have we learned about online discussion that either changes this or amends it with more subtle understanding?

It seems that Novice and Lurker map, and that Elder and Moderator might - but what other roles do we think exist now?


(F. Randall Farmer) #9

Or, as I’ve been saying about online communities for decades:

Similar debates arose over “hate props” such as Nazi symbols. Should they be banned, or would that be a violation of freedom of speech? Even more slippery was the issue of foul language. Which tainted words like f*** should be automatically blanked out with asterisks? Would adults be annoyed with such restrictions? A general rule of thumb followed the often-quoted motto of Randy Farmer, a pioneer in GMUK development and an honorary wizard at Palace: “Push the power down.”


(Chris Hanel) #10

Reading this, I would only make a slight alteration due to the granularity of tools available in Discourse. For trust to mean something in an online community where the goal is to lessen the burden on moderators to keep things clean (without it being exchanged for the headache of cleaning up when the wrong people are at the controls), there need to be two parallel areas: trust as it pertains to good behavior, and trust as it pertains to your value in the community.

I can foresee plenty of situations where I might trust someone to have moderator abilities in a more janitorial area (like duplicate posts, acting on flagged threads, moving threads), and not trust them to handle more subjective situations, such as closing a thread for being off-topic or against the rules, overseeing other users, and generally being asked to make judgement calls. Conversely, I could also see where the reverse would be true. Additionally, there’s a good number of edge cases where someone meets many of the criteria for being a “trusted user” but lacks a key aspect that simple stat tracking might miss.

For example:

  • Someone that posts frequently and is friendly, but not knowledgeable enough to make judgement calls about whether a thread has reached a conclusion or not, or a question has been adequately answered
  • Someone that rarely posts, but is extremely popular and well-respected, and their content is always an epicenter of discussion
  • Someone who is a valuable member of the community, posts regularly, but has very conflicting ideas about how the forum should be managed and operated
  • Someone who explicitly goes into their participation with the goal of gaining trust as quickly as possible, and fine-tuning their content to do so

My instincts on this are to go into this with the following design criteria:

  • Whatever statistics are required to go up in trust should be invisible to the membership, and preferably not set at a static number, so that people can’t go back and reverse-engineer where the metrics are.
  • Moderators should always have the ability to promote or demote users regardless of stats, and other mods should be able to see whether someone earned their rank through an organic rise through the ranks, or placed there by a mod
  • If at all possible, trust levels should attempt to break away from the idea of being a strict vertical hierarchy, and reflect more of a progression closer to a management/creative dichotomy, where there are two basic tracks of progression that are separate without being completely independent of each other. This is especially important if trust levels have an effect on the visibility and placement of your posts in the Popular sorting of the forum.

Those are my quick thoughts, would definitely like to revisit this one later when I can devote more time to thinking about what a trust progression would look like in my perfect world- er, forum.


(finid) #11

Ok, from this, we can assign:

  1. Guest or Visitor (trust level 0) - a passer-by. Not a registered user; that is, they may be registered but have not validated their email. All users at higher levels have registered and validated their registration.

  2. Noob (trust level 1)

  3. User (trust level 2)

  4. Junior Moderator (trust level 3)

  5. Senior Moderator (trust level 4)

  6. Admin (trust level 5)

That’s the easy part. The tricky part is assigning privileges to each trust level: who can post images or videos, or upload text files or attachments? What can a Junior Mod do? What can a Senior Mod do? At what level does a user gain PM privileges?


(F. Randall Farmer) #12

I’d go further. Defer setting the level numbers until after we’ve determined reasonable buckets of functionality to assign to each. Do the “easy” part last.

Don’t ask “What should a Jr. Mod do?” Instead ask: what are the things we need people to do to facilitate high-quality discussions?

We have a partial list already between the items listed in

and lists like [quote=“stienman, post:7, topic:3025”]

  • Able to move posts from topic to topic, or create new topics to put off-topic ones into.
  • Able to edit thread titles and categories
  • Able to close topics
  • Interact with plugins in a “gamemaster” role
  • Access to basic site stats so they can work on community building[/quote]

What else? (Please include ideas from other forums, not just existing Discourse features…)


(F. Randall Farmer) #13

Agreed. All reputation is in context, and you’re right that there are different contexts when talking about content quality. I like to talk about it as “The Good, the Bad, and the Ugly” - Up with the Good, Down with the Bad, and Out with the Ugly - and the trust/reputation score you use for flagging (Out with the Ugly) shouldn’t be confounded with good-citizen trust (Up with the Good). {See more in the thread: So What Exactly Happens when you “Flag”? }

I like to think of myself in this category on most of the forums I’m in. It took me a long time to get trust on rpg.stackexchange.com - though I never thought it unfair or problematic.

Invisible - absolutely. Non-static is problematic, as it becomes difficult for moderators to understand what’s happening if there aren’t clear thresholds. Let me suggest an alternative: trust isn’t simply an ever-increasing accumulator - it decays over time and must be “kept up” in order to retain the rank. Disappear for 2 years (and your content is no longer relevant) and your trust rank falls.

From Building Web Reputation Systems:

…time leaches value from reputation, but there’s also the simple problem of ratings becoming stale over time as their target reputable entities change or become unfashionable. Businesses change ownership, technology becomes obsolete, cultural mores shift.

The key insight to dealing with this problem is to remember the expression “What did you do for me this week?” When you’re considering how your reputation system will display reputation and use it indirectly to modify the experience of users, remember to account for time value. A common method for compensating for time in reputation values is to apply a decay function: subtract value from the older reputations as time goes on, at a rate that is appropriate to the context. For example, digital camera ratings for resolution should probably lose half their weight every year, whereas restaurant reviews should only lose 10% of their value in the same interval.

This would need to be implemented as an override - not a bonus to trust. Note that, if full mod privs are granted by trust, this override feature is required to bootstrap a server.

Could you unpack this further? I’m not following and I like what you’ve presented so far…


(Chris Hanel) #14

Well, while not a 100% precise analogy, I look at it like a two-dimensional political spectrum.

Instead of the different axes being Social Issues and Economic issues, I look at a good forum system as having the X and Y representing the two main aspects of contributing to a forum community: Contribution through content, and contribution through administration.

On this grid, you can plot out roles similar to the four squares present in the political grid, only much more nuanced. Everyone starts off at 0,0 in the bottom left corner, and the admin sits at 1,1 in the top right. Everything in between is determined by your rating in the two axes, and as they rise and lower, it moves your placement on the grid from role to role like thus…

[image: roles plotted on the two-axis grid]

WARNING: NOT AN EXACT CHART, MADE HASTILY FOR MAKING A POINT ON THE INTERNET, A PASTIME THAT RANKS SLIGHTLY ABOVE POLITICAL PUNDITRY

With a system like this, someone who has no interest in playing babysitter to their fellow members still feels the reward of contributing. Vice versa, a person who wants to contribute through assisting with managing the community, but isn’t as experienced, can contribute through flagging and basic management and see the fruits of their labor. Some actions contribute to the X axis, some contribute to the Y axis, some to both (but not necessarily equally).

To further expand while I’m thinking about it:

  • If you want to get away from static numbers, then have the option to grade on the curve above a certain level of membership size, with built-in tenure. The top 2% of members on the scale presented here are given moderation, and it’s much harder to fall out of moderation than get in (to prevent a constantly wavering trust level for a member right on the border)
  • Should an admin desire, have a setting that informs the user and the admin that someone has earned a certain level of trust, but that promotion has to be approved by the admin first. Give the admin a chance to discuss the added responsibility with the user, or (for some communities) test them in whatever way might be proper, like a black belt exam.
  • Even more bold: Have trust levels, have roles, do everything here, but don’t have sets of features statically tied to each role. Instead, allow people to earn further and further singular features as they progress or sustain a high trust level over an amount of time. Continually reward participation by not handing over all the features to a given role right up front, especially for someone becoming a full moderator. That system would be more complex on the back-end, but my brain starts to race at the opportunity here. (and yes, I’m aware that this starts to drift towards the realm of ‘gamification’, but, I refuse to use that term out of principle.)
  • Allow people to earn an ability and title, but give them the option to turn down any added responsibility, in case it’s not their thing.
  • I could write pages and pages more on this, and probably will over the next couple days. It’s something I’ve spent far too much time thinking about.
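The first bullet - grading on a curve, with tenure making it harder to fall out of moderation than to get in - amounts to a hysteresis latch over two percentile cutoffs. A sketch, with the 2%/5% thresholds invented purely for illustration:

```python
def update_moderator_set(scores: dict[str, float], current_mods: set[str],
                         promote_pct: float = 0.02,
                         demote_pct: float = 0.05) -> set[str]:
    """Grade on the curve with hysteresis: members in the top promote_pct
    of scores gain moderation, but a sitting moderator keeps the role
    until dropping below the looser demote_pct cutoff, so a member right
    on the border doesn't waver in and out of the role."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    promoted = set(ranked[:max(1, int(len(ranked) * promote_pct))])
    safe_zone = set(ranked[:max(1, int(len(ranked) * demote_pct))])
    return promoted | (current_mods & safe_zone)
```

With 100 members, a current mod ranked 4th keeps the role (inside the top 5%) even though only the top 2 would be promoted fresh.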

(smarsh2008) #15

I’m intrigued by this discussion (followed here from Randy’s twitter feed) - if I may interject?

Is a movement to the left possible? How does that work? In real relationships trust ebbs and flows, it doesn’t just ‘grow’ - but there may well be plateaus that make sense.

Also, how do you see real relationships between people in such a system (I am sorry, I think people keep forgetting about the contextuality of trust). That is, there is no single ‘trust level’ but a level of trust between people, not between people and system per se.

A little busy with a few things right now, hence a short interjection, but if you really wanted to build a system that was representative of ‘the world’ of Chris, you need to escape from the singular view of people that online systems have, and build it in terms of the multiple views of people that people have.


(Adam Davis) #16

That’s certainly an interesting perspective, but do we really want to build a social network with a complex relationship graph that knows where I stand on your specific graph, and where you stand on mine?

I think that would be too much granularity. In this system we really only need a very rough idea of the relationship between the community as a whole and a specific user. That user may be more or less trusted by individual members in the community, but the site should only need to know what the community thinks of this person on average, in order to give them more or less power and ability on the site.


(Chris Hanel) #17

I actually think that there’s a compromise to be found in these ideas that is useful without getting too insane.

Overall, I think your final level of reputation should be from the point of view of the community as a singular whole, but when it comes to a user browsing the forum, there could definitely be personal adjustments based on your own actions, such as a thread being more visible to you because the participants are those who you’ve liked and responded to on more occasions.

However, to be my own devil’s advocate, I generally disagree with going too far down that rabbit hole, as content systems that attempt to tailor and prioritize content at that deep of a level start to foster the problem of users not being exposed to ideas that are different or challenging to their own, and personal echo chambers start to form, which is more damaging to the community than anything else. (See this TED Talk for a great analysis of the problem.)


(Adam Davis) #18

ONEBOX’D!!!


(F. Randall Farmer) #19

Lots to talk about here… :smile:

Absolutely. All reputation is contextual. What we’re poking at here is what are the reasonable common contexts for forums.

Please, interject more @smarsh2008!

If we need to go all graphy, we should. But I agree, I’m not at all convinced that we’ve identified a need for that just yet. Let’s tease apart some cross-forum or cross-category reputations first.

But @ChrisHanel raised some detailed points (at my request) and we should look at each one.

Though I’m not sure I agree with the graph showing these two values as the X and Y axis, I think there is much good to mine from this basic definition.

There are at least two good wide-ranging karma scores we could calculate. I’ll name them formally to expedite discussion:

  1. AuthorContentQuality [ACQ], which is derived from ItemContentQuality [ICQ], which is in turn derived from Evaluation Statements, such as Like, Favorite, Flag, Bookmark, Reply, etc.
  2. AuthorEvaluationQuality [AEQ], which is derived from either or both of Moderator Verification Statements and Peer Agreement

In fact, these are very similar to the core scores in many social media content quality reputation systems I’ve helped design/build. You can read about one in great detail here: Case Study: Yahoo! Answers Community Content Moderation [Building Web Reputation Systems]

One could go further and suggest that there exists a function (f) where:

f(AuthorContentQuality, AuthorEvaluationQuality) = ContributorTrust (CT) ← this creates a 2D graph…

But I’m not sure that we should use the combo score (CT) to assign all features/roles. If we have multiple contexts, why not break up the roles/features along those lines? If you have a high AEQ, we consider giving you more moderator tools. If you have a high ACQ, more content creation/publication tools. What would be the purpose of assigning role-descriptive text names to each (ACQ, AEQ) sector? It just leads to confusion: “Why can’t Scholars do foo? Why can’t Jr. Moderators do bar?”
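A toy rendering of the two scores as named above, gating each toolset on its own context rather than on a combined CT rank. The evaluation weights and thresholds are made up for the example; real systems (see the Yahoo! Answers case study) are considerably more involved:

```python
# Hypothetical weights for Evaluation Statements; Flag counts against ICQ.
EVAL_WEIGHTS = {"like": 1.0, "favorite": 1.5, "bookmark": 0.5,
                "reply": 0.25, "flag": -2.0}

def item_content_quality(evaluations: list[str]) -> float:
    """ICQ: weighted sum of the Evaluation Statements on one item."""
    return sum(EVAL_WEIGHTS.get(e, 0.0) for e in evaluations)

def author_content_quality(items: list[list[str]]) -> float:
    """ACQ: average ICQ across an author's items."""
    return sum(item_content_quality(i) for i in items) / len(items) if items else 0.0

def author_evaluation_quality(upheld: int, total: int) -> float:
    """AEQ: fraction of the author's own evaluations (e.g. flags) later
    upheld by Moderator Verification Statements or Peer Agreement."""
    return upheld / total if total else 0.0

def granted_tools(acq: float, aeq: float) -> set[str]:
    """Gate each toolset on the score for its own context, rather than
    collapsing both into one combined ContributorTrust rank."""
    tools = set()
    if acq > 2.0:                       # illustrative threshold
        tools.add("content_publication_tools")
    if aeq > 0.8:                       # illustrative threshold
        tools.add("moderator_tools")
    return tools
```

Note how a user can hold one toolset without the other, which is exactly the argument against naming every (ACQ, AEQ) sector with a role title.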

And - what are the other contexts that are reasonable for the various uses of Discourse? There are certainly arguments to be made for per-category trust scores - especially if a forum has very differing “rules” per category (SomethingAwful and Penny Arcade come to mind.) It seems that someday having per-category permissions will make sense. [I’m not rushing to add those features just yet, just wanting to make sure that we don’t design ourselves into a box.]


On to Chris’ bullets - they all deserve responses:

Static levels don’t have to scale linearly. Also, I’ve already mentioned decay elsewhere: [quote=“frandallfarmer, post:13, topic:3025”]
trust isn’t simply an ever-increasing accumulator - it decays over time and must be “kept up” in order to retain the rank.
[/quote]

Also, putting in a latch to prevent a user’s capabilities from wavering back and forth is technically trivial. Note that this is an edge case that doesn’t happen often, and the user should be informed whenever (s)he’s demoted and what to do about it. In short, if you care, it won’t happen. If you don’t care, well - it doesn’t matter. :wink:

What about this instead? The cases are 1) Features are granted automatically (default), 2) Features are granted by Moderator fiat only. In both cases, the forum operator may opt-in to be notified whenever a user crosses a major threshold. That covers all the cases I can think of (preemptive, reactive, locked-down, and laissez-faire.)
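The two cases plus the opt-in operator notification can be sketched as a small policy switch; all names here are illustrative, not a proposed API:

```python
from enum import Enum

class GrantPolicy(Enum):
    AUTOMATIC = "automatic"       # case 1: grant on threshold crossing (default)
    MODERATOR_FIAT = "moderator"  # case 2: crossing only queues a request

def on_threshold_crossed(user: str, feature: str, policy: GrantPolicy,
                         notify_operator: bool = False) -> dict:
    """Apply the grant policy when a user crosses a trust threshold.
    notify_operator is the opt-in covering both cases."""
    return {
        "user": user,
        "feature": feature,
        "granted": policy is GrantPolicy.AUTOMATIC,
        "pending_approval": policy is GrantPolicy.MODERATOR_FIAT,
        "operator_notified": notify_operator,
    }
```

The four combinations of policy and notification give the preemptive, reactive, locked-down, and laissez-faire modes described above.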

Why have role titles if they don’t do anything or aren’t clear? As to fine-grained feature grants on a per-user basis - I’ve seen that tried up close at places like Answers.com. It was an administrative (and interface) nightmare - especially when you have per-feature, per-user administrator overrides.

Again - confusion around titles - either they mean something (to everyone) or they don’t. If I see you are a “Moderator” - believe me, I have some expectations that you will act, and quickly, when you see a goatse post - abdication of authority/responsibility is not an option. Not acting in that case should lower your AEQ significantly and cause you to lose the power/title. Not to mention the possible side effects should some content dispute erupt in the real-life “legal” arena from inaction.

Thanks for doing so! I’m sure others have similar thoughts and I know I’ve had this conversation with other companies when they are setting up their reputation systems. Honestly, you are already quite a few steps ahead of many of them in thinking…


(Bryce Glass) #20

Off the top of my head (and, per @frandallfarmer’s direction, not limited to existing features)… Privileges related to…

My own identity

  • ability to change my username (or change it more than 1x w/in a time period)
  • ability to change my avatar / icon (this, in combo w/the above, are identity-spoofing attack vectors)

Affect the data/metadata of a conversation

  • ability to ‘tag’ a conversation (if different than aforementioned categorization)
  • ability to add photos / videos / media to a post
      • from ‘trusted’ sources (publishers that the community has whitelisted)
      • vs. ‘non-trusted’ sources (free-form, anything goes)
  • ability to mark a thread as duplicate / related to some other thread

Impact someone else’s karma, for good or for bad

  • ability to invite others to a conversation / vouch for invitees
  • ability to mod someone’s response up / down (‘Was this helpful, yes or no?’)
  • ability to report abuse
      • on a post
      • on a user (identity-spoofing, repeated abuses, etc.)
  • ability to ‘ignore’ (killfile) other users?

Access to deeper statistics