Thoughts from the Elm community on Intentional Communication


(Jeff Atwood) #1

Fantastic talk by Evan Czaplicki from the Elm language community, based on their experience on Reddit and on their own Discourse instance:

I do wish there had been a bit more time for the end of the talk with the concrete feature suggestions…

… but overall this is a great talk and I recommend it if you’re interested in the community category here on meta :wink:

My main piece of advice for Evan is to abandon any significant investment of time in the /r/elm Reddit instance. Reddit is good for “what’s hot right now” and “breaking news” but sadly terrible to the point of being actively harmful at literally everything else involving human discussion.

I can’t emphasize this enough. It’s not that Reddit is bad (although it kinda is) but it’s such a negative return on emotional labor that I’d think twice before spending any time there at all beyond the absolute minimum necessary to have a presence.


(Evan Czaplicki) #2

Thanks for sharing the talk here!

Missing Stuff

In retrospect, I wish I had spent more time on these suggestions, in spite of messing up the timing overall.

One important thing that I forgot to mention is about moderation. I am hopeful that labeling intent on each post could make moderation feel much less arbitrary. Rather than referring to a set of abstract rules, a moderator can base decisions on the actual stated intent of the poster: “The stated intent is ‘learn’, but this post is doing X, Y, and Z instead.” I don’t really know if that’ll make moderation fully viable in practice, but it seems like it’d be at least more viable.

The Rate Limits part shared above is based on experiences from our Discourse instance, where we definitely have some posters who crowd out other perspectives with their speed and volume. The post-length idea in Draft Hints is sort of relevant to that. Both of those apply to anything where the visual presentation is single-threaded though. We had the same issues with our mailing lists as well, but probably a bit more extreme. (Time limits are more for cases where people are already angry, and expectations are more about online places for work, like GitHub.)
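To make the Rate Limits idea a bit more concrete, here is a minimal sketch in Elm of the kind of check I am imagining. Everything here is invented for illustration (the per-day cap, the 24-hour wait, the field names); it is not how Discourse actually implements its limits.

```elm
module RateLimit exposing (Decision(..), check)

{-| Whether a reply can go through, or whether the composer should ask
the poster to wait before adding another reply to the same topic.
-}
type Decision
    = Allowed
    | WaitHours Int


{-| `repliesToday` is how many replies this poster has already made in the
topic today; `dailyCap` is whatever per-day limit the community picked for
that category. Both names are made up for this sketch.
-}
check : { repliesToday : Int, dailyCap : Int } -> Decision
check { repliesToday, dailyCap } =
    if repliesToday < dailyCap then
        Allowed

    else
        -- Nudge the poster to come back later rather than rejecting silently.
        WaitHours 24
```

The point is only that the decision is mechanical and visible to the poster up front, rather than something a moderator has to litigate after the fact.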

I think this stuff is all “cherry on top” kind of ideas though. To me, the root problem is about different people having different value systems (great!) and then creating systems where they collide via big blank boxes (risky!). Having communities define what “intents” are possible within their community is the best idea I have had there, partly because it can be a bridge between different norms. Rather than learning “that’s not what we do here” by posting 20 times and getting weird reactions, maybe the design of the forum communicates these values immediately by asking “which of these N things do you intend to do?”
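For example, a community’s set of intents could be as small as a custom type like this. The four intents below are invented for the sketch; each community would choose its own list and wording.

```elm
module Intent exposing (Intent(..), all, label)

{-| The intents one hypothetical community decided to allow.
These are placeholders, not a recommendation.
-}
type Intent
    = Learn
    | ShowAndTell
    | RequestFeedback
    | ReportProblem


all : List Intent
all =
    [ Learn, ShowAndTell, RequestFeedback, ReportProblem ]


{-| The wording the composer could show when asking
“which of these things do you intend to do?”
-}
label : Intent -> String
label intent =
    case intent of
        Learn ->
            "I want to learn how to do something"

        ShowAndTell ->
            "I made something and want to share it"

        RequestFeedback ->
            "I want feedback on a design or proposal"

        ReportProblem ->
            "I think something is broken and want to report it"
```

The composer could render `all` as that up-front question, and a moderator could later point back at the selected intent when a post drifts away from it.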

Motivation

I felt like designs for online discussion kind of got stuck between the engagement and freedom lineages, and I wanted to start talking about a third way. So my goal was not to speak well or badly of any particular existing discussion forum, and I wasn’t imagining the audience as the folks who actually work on this stuff! (I hope it doesn’t feel aggressive or anything from this perspective!) Ultimately I just hope the ideas and references are interesting, and I am very keen to explore them in practice!


(Jeff Atwood) #3

Not at all – but I want to reiterate how dangerous it is to deal with Reddit for any kind of serious community engagement. It’s like playing with a bag of razor blades and sharp knives; only a matter of time until major injury results.

(I’ll stop harping on this now, but in the interests of avoiding pain and suffering for yourself and your community, that’s step zero.)


(Dave McClure) #5

Watched the talk and threw together these references along the way from within:

Some of the posts referenced in the talk:

Elm community discussions

Other references


(Erlend Sogge Heggen) #6

Big fan of your holistic take on open source development Evan, it reminds me a lot of the Rust community which I hold in very high regard.

Would love to hear more about exactly how you envision such a system working within Discourse. I’m playing with it in my head and I’m having a hard time making it work without forcing a tad too much busy-work upon the poster. Most dev communities I’ve looked at, Meta included, deal with intent reasonably well simply through proper naming and description of their categories. A few take it one step further by using topic templates.

Maybe there’s more to be gained by recursively requesting that the most problematic topics abide by certain standards. Don’t hesitate to close flame-bait topics down – ask the authors instead to rewrite their post according to certain best-practice principles, or to post it elsewhere. discourse.elm-lang.org is your house, after all, so you shouldn’t be afraid to enforce pretty strict rules of civilized discourse.


One very interesting subject you touched on was the value of engagement with regard to particularly inflamed discussions. Similar to how so much of the news media has resorted to prioritising divisive and shocking reporting to maximise ad impressions (i.e. maximum engagement), certain community members are very good at starting controversial discussions in the name of constructive feedback, which can quickly suck up a lot of the oxygen in a community. Discourse is already somewhat opinionated about what a successful community looks like, but we can do more to gently push community owners into the pit of success. Two things that immediately come to mind:

Engagement by itself is worthless

Most Discourse forums are not ad-based, so high engagement in an unconstructive debate is in fact a negative-sum game. The absence of the “any traffic is good traffic” incentive is something we can use to our advantage, and we should be mindful of this in our user documentation and dashboard statistics.

How to curtail negative-sum engagement

Much easier said than done, but I think it’s well worth pondering. Discourse has lots of micro-optimisations already in place for this purpose (e.g. not allowing downvotes, rate-limiting posters who are dominating a topic etc.) and your suggestions could lead to further improvements. I’m gonna create #feature stubs for your suggestions so far so that the community can help us flesh them out.


Lastly, did you see a big difference in tone and overall attitude between your Discourse and Reddit communities? I see that in your communications you prioritise your Slack chat and Discourse forum over Reddit (e.g. on your community page it’s just listed for the purpose of “discuss blog posts”), which I think is the right way to go, but I wonder if it could be taken even further. I’ll expand if we take this discussion further.


(Daniel Hollas) #7

OMG, this! :arrow_double_up: For those who don’t have time to watch the talk, I highly recommend at least reading this post. It is one of those cases where you have something on your mind, and then someone writes about it and it suddenly all makes sense. :slight_smile: Thanks @evancz for taking the time to write that. I found it highly relevant even outside the Open Source community.


(Evan Czaplicki) #8

Thank you for taking a look at these resources! It is really exciting to see other people thinking about the same problems! :smiley:

Some comments stuck out to me that I want to mention.

I wouldn’t say that the problem is actually that the discourse is not civil. One of the things I did not have time to talk about explicitly is asymmetric time costs. Say there is a decision that is the result of a very complex balance of concerns. Person X will say “I think this is bad” and someone in the community may spend an hour writing up the complex situation. Person X does not find this satisfying, so they just keep saying “I think this is bad” any time it is vaguely relevant. This self-expression costs Person X very little, but lots of other posters end up reacting. Everyone is civil enough, but there is no “agree to disagree” that ejects everyone from the loop. So an asymmetric time cost is when one person can invest a small amount of time to spend a great deal of other people’s time without breaking any rules. Often these cases have no natural limit.
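To put toy numbers on it (all invented): if writing “I think this is bad” takes two minutes and six people each spend half an hour responding, the community spends ninety minutes of attention for every minute the original poster spent. A tiny Elm function makes the ratio explicit; none of this is a metric we actually track.

```elm
module TimeCost exposing (asymmetry)

{-| Community minutes spent per instigator minute spent.
All the numbers in the example are invented.

    asymmetry { instigatorMinutes = 2, responders = 6, minutesPerResponse = 30 }
    --> 90

-}
asymmetry : { instigatorMinutes : Int, responders : Int, minutesPerResponse : Int } -> Float
asymmetry { instigatorMinutes, responders, minutesPerResponse } =
    toFloat (responders * minutesPerResponse) / toFloat instigatorMinutes
```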

We experimented with banning cases like this, and we definitely paid for that. I think six people got banned total, and it created some very strange narratives about “they don’t want to hear disagreement,” which I think is misdiagnosing a complex situation. (Another asymmetric time cost conversation!) So I hear you about not being afraid of this, but I just want to point out that it has a notable cost as well.

I think (1) the work instead goes to the moderators, except now it takes much longer overall and is perceived as adversarial, and (2) everyone else pays while things are unresolved. Moderators are very rarely talking to people who are easy to communicate with online, so “hey, can you try to X because of Y?” is heard in a very different way, and the interaction can end up significantly disrupting that moderator’s day in terms of emotions and productivity.

So I am personally okay if such a design “reduces engagement” with the forum. On some level, it is trimming down the infinite conversation possibilities to a smaller infinity, so one could reasonably expect less conversation. I could also see how conversations generally being more constructive could attract people who currently don’t feel comfortable participating, increasing the overall level.

Ultimately though, my hope is that having structured conversations wouldn’t feel like a burden for people.


(Michael Howell) #9

Dude, you just reinvented Stack Overflow. And Stack is kind of infamous.

4. Sadness after being told to act more like a robot

As another sign of its inhumanity, Stack Overflow discourages greetings and thanks.

  • “just tried to write an answer on stack overflow, it’s a horrible experience, but what really surprised me is that they edited my answer and removed the “compassionate parts”… don’t read this thread if you want to stay positive today”
  • “A user with a mere 4,000 reputation edited the tags on my first question and took the opportunity to remove me saying ‘thanks’…That may seem like a tiny thing to some people, but I found it immensely offputting that a stranger was bothered enough by two words of common politeness to silently remove them from my post.”

Robots may not have use for these words, but humans use them to make others feel welcome and appreciated.

I’m not at all convinced that locking people into a finite set of “intents” (in Stack’s case, questions, answers, and clarification-requesting comments, just like your “Learning” intent) is the answer, partly because it comes across as robotic and neurotic, partly because people simply lie about their intent, and partly because people find ways to be horrible within the system (snarky answers, etc.).


(Jeff Atwood) #10

We tend to close topics like that. I’ve seen @sam use the timed close function (where a topic will be closed for {x} hours or days, then automatically reopen) when there’s a long, thoughtful writeup of the pros and cons but a lot of knee-jerk “this is still bad” replies.

I find that this tends to be topic-specific, so temporarily closing those particular problem topics usually solves the problem. If it doesn’t, because the person is doing this in many or all topics they participate in… then that violates multiple guidelines I defined in What If We Could Weaponize Empathy?, such as:

  • Endless Contrarianism
  • Axe-Grinding
  • Griefing
  • Persistent Negativity
  • Ranting
  • Grudges

If one or more of these criteria are met for a sustained period of time, that’s ample grounds for suspension.

Feel free to cite those rules in public when suspending users. We have suspension reasons that can be applied to accounts (and are in fact required for a suspension). Suspensions are always timed in Discourse, so you can also leave the door open to a person reforming later… but the emotional labor of reforming is on them, not you.

The goal of Discourse is to amortize effort across the community whenever possible … not to concentrate the moderation (and emotional labor) load on staff.

In general, when you talk about Intentional Communication, to me that means specialized software. For example, Stack Overflow has very strict (and necessarily so) norms, as it is learning-focused Q&A, where the goal is to create a useful shared artifact for future programmers, not to “answer my computer programming tech support question right now.”

I can tell you that many people do feel the style of communication on Stack Overflow is a burden. But what they can’t deny is that this structured communication results in very effective search artifacts for future visitors… which, of course, is the whole point of those interactions!

(Heh, I just noticed @notriddle posted basically the same thing while I was composing this reply. Do note that the “complaints” in that article are largely misunderstandings of what Stack Overflow is designed to do, and who it’s designed for. Imagine the sheer utter inhumanity of a world where Wikipedia articles don’t start with Hello and end with Thank You Very Much!)

Since Discourse isn’t a specialized communication tool, but a tool for general purpose communication in a variety of disparate communities, there’s a limit to what can be done here.


(Michael Howell) #11

Of course, you spun it positively, I spun it negatively, but as far as actual facts go, we completely agree: the intentional communication model cannot be the primary means of community-forming. There has to be an unstructured communication channel, warts and all, and if you don’t provide one, people will make one, even if it involves misusing a specialized piece of software.

(which is why Wikipedia not only has areas for general discussion, but features the Talk links at the top of every article, near to the edit button, where would-be editors can’t miss it)

And, my final edit, in which withoutboats from the Rust core team calls for less talk of concrete feature suggestions.


(Evan Czaplicki) #12

Thanks for the advice on asymmetric time costs! I’ll have our moderators read that.

In my talk, I was trying to say that the “self-expression <=> self-expression” flow is the general-purpose flow. That is basically what all forums use now. On top of that, different communities could create more structured flows for their particular needs. So it seems like people are objecting to “all structured flows with no release valves,” but that is a rather extreme interpretation of the core idea.

Thanks again for the notes on what y’all moderate. I appreciate it!