I think your last sentence is really the important point: a centralized ‘trust’ level would have to be computed from trusted sources, not just any Discourse that connects to the mothership. Otherwise a user could set up their own instance, do whatever is needed to get their trust level moved up, then create an account on their target site and immediately have a trust level they should not have.
I think the current system is well designed and fits the stated purpose; a user’s trust level is not just a measure of that user’s intent/ability/etc., but should also take into account the subject matter of the site that is applying it. Having a high trust level on a motorcycle maintenance forum doesn’t imply you should have a high trust level on a Rails programming forum.
It’s very difficult to account for the subject matter. I have been part of several communities where the reason a person was there began as one subject and developed into an entirely different one. There are also forums with such a wide variety of topics that accounting for them would require many metrics and would generally invade on privacy. The metrics showing whether a person is generally malicious or productive should perhaps matter only for the person’s entry level into a Discourse configured to accept the mothership’s metric.
I’m not so sure it’s a niche case. I do this too, but I take it a step further. I don’t even sign in to forums that I have an account for unless I intend to post.
Fair point, but the reading barrier from trust level 0 to 1 is very, very low: entering 5 topics and reading 50 replies by default, with a total read time of 15 minutes.
It is intentionally easy to achieve, but not so easy that you can do it trivially right after signing up and then turn around and engage in shenanigans with a bunch of sockpuppet accounts…
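A minimal sketch of what that 0 → 1 promotion check might look like, using the default thresholds quoted above; the `ReadingStats` shape and function name are invented for illustration, not Discourse’s actual implementation:

```typescript
// Hypothetical sketch of the level 0 -> 1 promotion check,
// using the default thresholds quoted above.
interface ReadingStats {
  topicsEntered: number;
  postsRead: number;
  minutesReading: number;
}

const TL1_REQUIREMENTS: ReadingStats = {
  topicsEntered: 5,
  postsRead: 50,
  minutesReading: 15,
};

function qualifiesForTrustLevel1(stats: ReadingStats): boolean {
  return (
    stats.topicsEntered >= TL1_REQUIREMENTS.topicsEntered &&
    stats.postsRead >= TL1_REQUIREMENTS.postsRead &&
    stats.minutesReading >= TL1_REQUIREMENTS.minutesReading
  );
}
```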
My problem is not with the concept of requiring a user to read before posting (a ‘reading barrier’, as you call it) - I think this is fundamentally a good idea. What I am worried about is that if you require a user to be logged in just in order to obtain ‘reading credit’, then you are asking users to change their behaviour to suit Discourse. (I am assuming that there are other users like me who often read without being logged on, or even before creating an account on a forum.)
Can ‘reading credit’ (strictly, trust level increases) only be obtained while logged into a Discourse forum, or is there some way in which it could be tracked for an anonymous user (e.g. in a cookie or some other client-side storage) and then ‘applied’ to the user’s account when the user logged in (or even created their account for the first time)? If you could do anonymous user trust level increases, it would solve my hypothetical problem.
I like this idea, anonymous trust-level information would also be useful for forum admins to evaluate their anonymous audience and see ratios of lurkers vs. one-off drop-ins.
It certainly is possible to accumulate “logged-out” trust. All major websites do it for various reasons, usually having to do with anti-spam and behavioral targeting. When I was working with Yahoo! Answers, we used it as a small component of our “Inferred Karma” component of trust. See: Case Study: Yahoo! Answers Community Content Moderation [Building Web Reputation Systems]
It is worth noting that we added this element to Y!Answers Karma (aka trust) only after we had finished the rest of the basic design. It should never be a large portion of trust - just a little “bootstrap.”
But, I do think this idea has some real teeth for Discourse - if it’s tied to moving the user up the trust chain. Let me suggest:
1. Browser-instances are given unique keys and a bit of server-managed persistent storage (in the old days we called this a b-cookie, but the tech doesn’t matter.)
2. Once that browser-instance logs in (or creates an account), that browser is marked to never collect the browser-specific trust described below. (This prevents lots of problems.)
3. Until then, the browser-instance-id accumulates reading trust scores, but only until they reach the requirements for trust level 1.
4. If the browser-instance reaches the requirements for Trust Level 1, but has never logged in, a dialog appears inviting the user to join the conversation by either creating an account or logging in with an existing account.
5. If the user first logs in and is at level 0, the reading trust scores are transferred to the account, but must not exceed the maximum requirement. (All we’re doing is trying to get the user to level 1, not do accounting…)
Step 4 (and tracking non-logged-in reading metrics) are the only real reasons to do this fairly complicated thing. There are a lot of complications in treating a browser as a user - and we don’t get a lot of utility out of debugging those cases. Even as described, I’m not sure how often this case will come up - but I’m sure it will be helpful for some larger sites.
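To make the five steps concrete, here is a minimal sketch of the server-side bookkeeping, assuming a per-browser record keyed by the b-cookie-style id; every name and the credit cap are invented for illustration:

```typescript
// Hypothetical sketch of steps 1-5. Nothing here is Discourse's actual code.
interface BrowserTrustRecord {
  browserId: string;      // the unique "b-cookie"-style key (step 1)
  everLoggedIn: boolean;  // once true, never collect again (step 2)
  readingCredit: number;  // accumulated reading trust score (step 3)
}

const TL1_CREDIT_CAP = 100; // assumed level 1 requirement; invented number

function showJoinDialog(): void {
  // Invite the reader to create an account or log in (step 4; UI elided).
}

function recordReading(rec: BrowserTrustRecord, credit: number): void {
  if (rec.everLoggedIn) return; // step 2: stop collecting after any login
  // Step 3: accumulate only until the level 1 requirement is reached.
  rec.readingCredit = Math.min(rec.readingCredit + credit, TL1_CREDIT_CAP);
  if (rec.readingCredit >= TL1_CREDIT_CAP) {
    showJoinDialog(); // step 4
  }
}

function onLogin(rec: BrowserTrustRecord,
                 user: { trustLevel: number; readingCredit: number }): void {
  if (user.trustLevel === 0) {
    // Step 5: transfer the credit, never exceeding the level 1 maximum.
    user.readingCredit = Math.min(user.readingCredit + rec.readingCredit,
                                  TL1_CREDIT_CAP);
  }
  rec.everLoggedIn = true; // step 2: mark the browser; collection stops here
  rec.readingCredit = 0;
}
```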
I see a few issues with this approach. Mainly that I could, for instance, just load some pages and scroll through them for reading credit. There are also (presumably) ways to get around those limits, like replies linking to themselves. And how does reading time get counted? If I left a webpage open in the background, would it keep adding time or stop eventually? Would it warn me before stopping? (bad idea IMHO) How would that limit be set? The ideal way would probably involve taking a screenful of text at the average reading speed and recording that as the time spent; a practical implementation could read the display resolution and multiply it by some empirically determined constant.
I also think that there should be a way for my trust to transfer to other sites, like How-To Geek or Discourse.org; if you start on one and go to the other, some of my trust should transfer (maybe half of it, up to a total of level 1, or even just: level 1+ transfers as level 1).
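Purely as a hypothetical, that musing could be written as a rule like this; none of it is anything Discourse implements:

```typescript
// Hypothetical transfer rule, as mused above: half the source-site trust
// level, capped at level 1.
function transferredTrustLevel(sourceLevel: number): number {
  return Math.min(Math.floor(sourceLevel / 2), 1);
  // Or the simpler variant: any level 1+ transfers as level 1:
  // return sourceLevel >= 1 ? 1 : 0;
}
```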
The code has some sense of dwell time vs. scroll time built in. But, it doesn’t have to be perfect (which is an impossible task anyway).
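For illustration only, dwell-vs-scroll tracking on the client might look something like the sketch below; the idle threshold is an invented number, and this is not what Discourse actually ships:

```typescript
// Rough sketch of a dwell-vs-scroll read timer: credit reading time only
// while the tab is visible, and stop crediting after a period with no
// scroll activity. The threshold is invented for illustration.
const MAX_IDLE_MS = 60_000; // stop counting after a minute of inactivity

let lastActivity = Date.now();
let lastTick = Date.now();
let creditedMs = 0;

document.addEventListener("scroll", () => {
  lastActivity = Date.now();
});

setInterval(() => {
  const now = Date.now();
  const active = !document.hidden && now - lastActivity < MAX_IDLE_MS;
  if (active) {
    creditedMs += now - lastTick; // dwell time counts only while active
  }
  lastTick = now;
}, 1_000);
```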
As my security-sensei used to say about abuse mitigation: “What is your threat model? What do you have to lose if someone circumvents the system?” In this case, going from level 0 to 1 grants pretty trivial powers (adding more images and links to posts, the right to vote and flag, etc.). Anyone who wants to pretend to read enough to pass the trust-level-1 test should be trust level 1. If they abuse the system, other users flagging them will be sufficient.
Absolutely not. As your trust level increases in a community-specific context, you gain more in-context authority (powers) and responsibility. That should NOT transfer to another context. There are forums that are complete opposites of each other, and it is not appropriate to grant a John Birch Society member automatic rights (even level 1) on a Karl Marx Fanclub site (or vice-versa.)
I’ll be honest: I read the beginning but started skimming after like 10 posts.
The point I made is pretty moot for 0–1, but the requirements for higher trust levels also allow for this to be used. Someone could easily set up a farming operation to quickly get level 2 accounts, then PM-spam everyone.
Trust level is topic-dependent to a certain extent, but if scrolling through 50 posts earns me enough trust for level 1 rights, then why can’t I, as a trusted user somewhere else, be given that basic level of trust? Certainly using any other site qualifies me to use this one. Granted, it wouldn’t be good for opposing-viewpoint sites in most cases, but at the same time, if I’m an expert in supply-side economics, I have knowledge (probably more than many people) of demand-side economics. The real issue would be something like Bieber-nation trust transferring over to a quantum mechanics site. Again though, the idea is to get people the core functionality after they’ve proven they know what they’re doing and are not a spam-bot. They could always be flagged and demoted if this proves false, but that is more than likely the exception rather than the rule.
If the scroll demons can gain level 1, why can’t the expert from someothersite.com? It’s highly illogical to argue that security doesn’t matter when new users can just scroll their way past the barrier before serious spamming, while not allowing experts to take advantage of links, pictures, PMs, flagging posts, etc. Either level 1 isn’t a high enough level to warrant serious protection, or it is.
I don’t think so. That requires user participation that is approved (liked, not flagged, etc.) by the community - not something easily “bottable”. The trust system Discourse is building has resistance to such attacks built in.
Besides, is PM spamming an actual attack, or are you just postulating a possible one? I hadn’t heard of this (or been the target of it) until you mentioned it here.
Your two statements are mutually inconsistent - which is it? Is trust context-dependent (you say ‘topic’), or is it universal?
There is a separate topic on Per-Discourse-Topic trust:
Knowing supply-side economics has nothing to do with your ability to identify the hottest new Reggae bands.
Can you provide a successful example of this level of trust granted across multiple forums/communities anywhere on the internet? Perhaps then I can see what you mean by trust.
I remain confused by two seemingly contradictory things in your post:
Fear of botting to generate trust
Sharing trust across sites
Surely you can see how sharing trust across sites makes it easier to circumvent built-in defenses against spammers. I just create a site with this open-source software that automatically gives me trust level X, and then log in to the Discourse sites I want to attack. Whamm Bamm, Here’s My Spam! Delete my account and I’ll be right back!
I looked at the post here on trust levels, and see it is quite old. @codinghorror - do you think we should put something a little more up-to-date up here on Meta? It might help with some misconceptions…
I can say from my own personal perspective that I will never progress further than trust 0/1. I will lurk a lot and read posts, but rarely reply, the reason being that others will have made the points/positions I have, earlier or more clearly than I could. Occasionally I may be able to throw something into the mix, but I have found that by the time I’ve composed a post to a thread, it’s moved on.
I’m not sure that I’m in favour of x number of posts allowing one to gain moderator level - I’m sure I read somewhere that there will be an element of peer review and communal consensus. Surely that would remove the botting aspect of gaining trust. Also, again from my own perspective, if I were to reach the level 3 moderator level, I probably wouldn’t want it, even if my peers pushed for me to do so. Why? I wouldn’t have the time or inclination.
One of the things I have often thought about on other forum sites is what makes someone want to be a moderator; it’s a bit like wanting to become a politician. Fair enough, many people work hard to gain those positions - but take a friend of mine who strove hard to become an MP in the UK: he never got into office because he never got voted in, coming second in the constituency election.
I think sharing trust across sites is a good thing, but it has to be done with caution - how, I don’t know. I guess the peers on the new site I’m bringing my “trust” to would make up their minds in their own good time. Plus, I would have to start the process of gaining trust in the new site in exactly the same way as on the old one. Maybe a ratio or split could be devised (Site1:T100 / Site2:T45 / Site3:T300), but that would possibly end up very complicated given the number of variables and factors that would go into it.
So after my off-topic ramble, here’s my 2p-worth on the initial question: yes, Discourse should require reading to gain initial trust over a probationary period, so that it shows on my account that I’ve taken some initial interest in the discussions going on.
I was pointing out the inconsistency in gaining trust to level 1. If it’s fair to allow me to just scroll my way there, why isn’t proven trust elsewhere applicable? Of course, if it is that important that trust not transfer, then likewise I shouldn’t be able to scroll to level 1.
The PM spam is just something I thought of based on the possibility of scrolling to level 2; probably not a very serious risk.
If I am recognized as an expert at HTG, why can’t I use all the core functionality at this site, discussing the platform itself?
I’m not saying trust should be universal, except on some level where level 1 is somewhat transferable, given its relaxed security and lack of real power. I understand the argument about not allowing trust to transfer automatically, but again, given that level 1 is so easily attainable, why not just automatically grant it to someone who is level 2 elsewhere? To prevent the hypothetical set-up-your-own-site attack, put requirements on those forums, such as a certain level of activity/involvement. Ideally this would create some sort of overall moderators who would review forums and add them to a list of those approved as trustworthy sites. This feature should also be possible to disable, or to set up so that the target forum could whitelist or blacklist specific other forums for level 1 trust transfer - e.g. a Windows 8 site whitelisting Windows 7, Windows RT, Windows Vista, etc. sites, allowing flagged level 1 status from any other site except Windows Sucks, Anything But Windows, Microsoft is Terrible, etc.
Actually, the best way would be to set up something that optionally collects statistics (site name, URL, # of posts, # of active users, etc.), creating a master list that other sites could use to greylist (flagged level 1) or even whitelist sites for trust transfer - roughly along the lines of the sketch below.
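Purely as an illustration of that proposal - nothing like this exists in Discourse, and every name here is invented - a per-site transfer policy might look like:

```typescript
// Hypothetical per-site transfer policy combining a whitelist, a blacklist,
// and a shared master-list greylist, as proposed above.
interface TransferPolicy {
  whitelist: Set<string>; // source sites whose level 1 transfers directly
  blacklist: Set<string>; // source sites never trusted
  greylist: Set<string>;  // master-list sites granted "flagged" level 1
}

type TransferResult = "level-1" | "flagged-level-1" | "none";

function trustTransfer(policy: TransferPolicy,
                       sourceSite: string): TransferResult {
  if (policy.blacklist.has(sourceSite)) return "none";
  if (policy.whitelist.has(sourceSite)) return "level-1";
  if (policy.greylist.has(sourceSite)) return "flagged-level-1";
  return "none";
}
```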
Of course most of what I’ve said really should be put in that other thread…
I think the difference of opinion here is caused by different interpretations of “trust”. Let’s divide users into three groups:
1: Undesirable users (spammers, bots, trolls, vindictive, offensive, etc.).
2: Well-intentioned humans. Good/honest real people with an unknown level of expertise in the forum’s subject.
3: Users who have proved themselves knowledgeable in the subject of the forum and a valuable addition to the community.
The people saying “trust should be transferable” are probably thinking of trust as deciding if someone is a 1 or 2 from the list above.
Whereas the intention of trust-level by the forum-software developers seems to be to decide if someone is a 2 or a 3 from the list above.
So it seems to me that the intention of the developers here is that you assume everyone is a 2 and have separate anti-undesirable mechanisms in place to deal with the 1s, and a “trust” component to deal with knowing who can be a 3 and do more things. Given this, do the people saying “trust should be transferable” still think this way?
In my book on Building Reputation Systems, we separate those kinds of trust (“You are not evil” vs. “You are a good contributor”). I like to call it TOS vs. QOS: Terms of Service violations versus Quality of Service goals.
Spammers and the worst trolls (TOS) are dealt with primarily via flagging.
Finding the best contributors (QOS) is what we use trust for. Different cases entirely.
Note that level 0 users can post - they are just limited in what the post can contain.
This makes a lot of sense to me. It seems to capture the most users. Edge cases are always present, so spending less time perfecting the un-perfectable is a plus - especially when that effort can go toward something more important.
I am really doubtful that (the vast majority of) users will log in and ever be upset that they didn’t get all their credit for reading before logging in. I read a fair amount on this forum before getting an account.
I think it may be a good idea to let users know that once they have an account, best practice would be to log in whenever they are visiting the forum, and briefly explain why. Maybe. It would be at the cost of inundating them with information, I suppose.
This may exist (and won’t I feel silly), but possibly have a system introduction separate from the obligatory “Welcome” message - something the curious user could find in their profile page that explains how the system works, leaving the moderator’s Welcome message specific to the given forum.
While I might be an edge case, I just ran up against this.
I noticed extra “+ Reply as new Topic” buttons appear while I was searching for how this (advertised) feature worked, and suddenly had topics open both with and without them.
In further searching I did find Understanding Discourse Trust Levels, but some indication that there was a change to what I could do would have helped me, particularly because it introduces a nuance of Discourse compared to the old forum model. A PM seems ideal.