Thoughts from the Elm community on Intentional Communication

Fantastic talk by Evan Czaplicki from the Elm language community based on their experience on reddit and on their own Discourse instance:

I do wish there had been a bit more time for the end of the talk with the concrete feature suggestions…

… but overall this is a great talk and I recommend it if you’re interested in the community category here on meta :wink:

My main piece of advice for Evan is to abandon any significant investment of time in the /r/elm Reddit instance. Reddit is good for “what’s hot right now” and “breaking news” but sadly terrible to the point of being actively harmful at literally everything else involving human discussion.

I can’t emphasize this enough. It’s not that Reddit is bad (although it kinda is) but it’s such a negative return on emotional labor that I’d think twice before spending any time there at all beyond the absolute minimum necessary to have a presence.


Thanks for sharing the talk here!

Missing Stuff

In retrospect, I wish I had spent more time on these suggestions, in spite of messing up the timing overall.

One important thing that I forgot to mention is about moderation. I am hopeful that labeling intent on each post could make moderation feel much less arbitrary. Rather than referring to a set of abstract rules, a moderator can base decisions on the actual stated intent of the poster. “The intent is ‘learn’ but this post is doing X, Y, Z instead.” I don’t really know if that’ll make moderation fully viable in practice, but it seems like it’d at least be more viable.

The Rate Limits part shared above is based on experiences from our Discourse instance, where we definitely have some posters that crowd out other perspectives with their speed and volume. The post length idea in Draft Hints is sort of relevant to that. Both of those apply to anything where the visual presentation is single-threaded though. We had the same issues with our mailing lists as well, but probably a bit more extreme. (Time limits is more for cases where people are already angry, and expectations is more about online places for work like GitHub.)

I think this stuff is all “cherry on top” kind of ideas though. To me, the root problem is about different people having different value systems (great!) and then creating systems where they collide via big blank boxes (risky!). Having communities define what “intents” are possible within their community is the best idea I have had there, partly because it can be a bridge between different norms. Rather than learning “that’s not what we do here” by posting 20 times and getting weird reactions, maybe the design of the forum communicates these values immediately by saying “which of these N things do you intend to do?”
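To make the idea concrete, here is a tiny illustrative sketch of that design. This is not Discourse code; the intent names and function are invented for illustration. The point is just that the community, not the software vendor, defines the finite set of intents, and a post cannot be submitted without declaring one:

```python
# Hypothetical sketch: a community defines its allowed intents, and every
# new post must declare one of them before it is accepted. All names here
# are invented for illustration.

ALLOWED_INTENTS = {"learn", "teach", "propose", "report-bug", "show-and-tell"}

def validate_post(intent: str, body: str) -> str:
    """Reject posts whose stated intent is not one the community defined."""
    if intent not in ALLOWED_INTENTS:
        choices = ", ".join(sorted(ALLOWED_INTENTS))
        raise ValueError(f"Pick one of these intents: {choices}")
    # Tag the post with its declared intent so moderators and readers
    # can judge it against what the poster said they were doing.
    return f"[{intent}] {body}"
```

The interesting design property is that the error message itself is the onboarding: a newcomer learns “what we do here” at posting time, instead of after 20 posts and some weird reactions.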


I felt like designs for online discussion kind of got stuck between the engagement and freedom lineages, and I wanted to start talking about a third way. So my goal was not to talk good or bad about any particular existing discussion forum, and I wasn’t imagining the audience being the folks who actually work on this stuff! (I hope it doesn’t feel aggressive or anything from this perspective!) Ultimately I just hope the ideas and references are interesting, and I am very keen to explore them in practice!


Not at all – but I want to reiterate how dangerous it is to deal with Reddit for any kind of serious community engagement. It’s like playing with a bag of razor blades and sharp knives; only a matter of time until major injury results.

(I’ll stop harping on this now, but in the interests of avoiding pain and suffering for yourself and your community, that’s step zero.)


Watched the talk and threw together these references along the way:

Some of the posts referenced in the talk:

Elm community discussions

Other references


Big fan of your holistic take on open source development Evan, it reminds me a lot of the Rust community which I hold in very high regard.

Would love to hear more about exactly how you envision such a system working within Discourse. I’m playing with it in my head and I’m having a hard time making it work without forcing a tad too much busy-work upon the poster. Most dev communities I’ve looked at, Meta included, deal with intent reasonably well simply by proper naming and description of their categories. A few take it one step further by using topic templates.

Maybe there’s more to be gained by repeatedly requesting the most problematic topics to abide by certain standards. Don’t hesitate to close flame-bait topics down – ask the authors instead to rewrite their post according to certain best-practice principles, or to post it elsewhere. It is your house after all, so you shouldn’t be afraid to enforce pretty strict rules of civilized discourse.

One very interesting subject you touched on was the value of engagement with regards to particularly inflamed discussions. Similar to how so much of news media has resorted to prioritising divisive and shocking news reporting to maximise their ad impressions (i.e. max engagement), certain community members are very good at starting controversial discussions in the name of constructive feedback, which can quickly suck up a lot of the oxygen in a community. Discourse is already somewhat opinionated on what a successful community looks like, but we can do more to gently push community owners into the pit of success. Two things that immediately come to mind:

Engagement by itself is worthless

Most Discourse forums are not ad-based, so high engagement in an unconstructive debate is in fact a negative-sum game. The absence of the “any traffic is good traffic” incentive is something we can use to our advantage, and we should be mindful of this in our user documentation and dashboard statistics.

How to curtail negative-sum engagement

Much easier said than done, but I think it’s well worth pondering. Discourse has lots of micro-optimisations already in place for this purpose (e.g. not allowing downvotes, rate-limiting posters who are dominating a topic etc.) and your suggestions could lead to further improvements. I’m gonna create #feature stubs for your suggestions so far so that the community can help us flesh them out.

Lastly, did you see a big difference in tone and overall attitude between your Discourse and Reddit community? I see that in your communications you prioritise your Slack chat and Discourse forum over Reddit (e.g. on your community page it’s just listed for the purpose of “discuss blog posts”) which I think is the right way to go, but I wonder if it could be taken even further. I’ll expand if we take this discussion further.


OMG, this! :arrow_double_up: For those not having time to look at the talk, I highly recommend at least reading this post. It is one of these cases where you have something on your mind, and then someone writes about it and it suddenly all makes sense. :slight_smile: Thanks @evancz for taking time to write that. I found it highly relevant even outside the Open Source community.


Thank you for taking a look at these resources! It is really exciting to see other people thinking about the same problems! :smiley:

Some comments stuck out to me that I want to mention.

I wouldn’t say that the problem is actually that the discourse is not civil. One of the things I did not have time to talk about explicitly is asymmetric time costs. Say there is a decision that is the result of a very complex balance of concerns. Person X will say “I think this is bad” and someone in the community may spend an hour writing up the complex situation. Person X does not find this satisfying, so they just keep saying “I think this is bad” any time it is vaguely relevant. The cost of this self-expression is that lots of other posters end up reacting. Everyone is civil enough, but there is no “agree to disagree” that ejects everyone from the loop. So an asymmetric time cost is when one person can invest a small amount of time to spend a great deal of other people’s time without breaking any rules. Often these cases have no natural limit.

We experimented with banning cases like this, and we definitely paid for that. I think six people got banned total, and it creates some very strange narratives about “they don’t want to hear disagreement” which I think is misdiagnosing a complex situation. (Another asymmetric time cost conversation!) So I hear you about not being afraid about this, but I just want to point out that it has a notable cost as well.

I think (1) the work instead goes to the moderators, except now it takes much longer overall and is perceived as adversarial, and (2) everyone else pays while things are unresolved. Moderators are very rarely talking to people who are easy to communicate with online, so “hey, can you try to X because of Y?” is heard in a very different way, and the interaction can end up significantly disrupting that moderator’s day in terms of emotions and productivity.

So I am personally okay if such a design “reduces engagement” with the forum. On some level, it is trimming down the infinite conversation possibilities to a smaller infinity, so one could reasonably expect less conversation. I could also see how conversations generally being more constructive could attract people who currently don’t feel comfortable participating, increasing the overall level.

Ultimately though, my hope is that having structured conversations wouldn’t feel like a burden for people.


Dude, you just reinvented Stack Overflow. And Stack is kind of infamous.

4. Sadness after being told to act more like a robot

As another sign of its inhumanity, Stack Overflow discourages greetings and thanks.

  • “just tried to write an answer on stack overflow, it’s a horrible experience, but what really surprised me is that they edited my answer and removed the “compassionate parts”… don’t read this thread if you want to stay positive today”
  • “A user with a mere 4,000 reputation edited the tags on my first question and took the opportunity to remove me saying ‘thanks’…That may seem like a tiny thing to some people, but I found it immensely offputting that a stranger was bothered enough by two words of common politeness to silently remove them from my post.”

Robots may not have use for these words, but humans use them to make others feel welcome and appreciated.

I’m not at all convinced that locking people into a finite set of “intents” (in Stack’s case, questions, answers, and clarification-requesting comments, just like your “Learning intent”) is the answer. Partly because it comes across as robotic and neurotic, and partly because people simply lie about their intent, and partly because people find ways to be horrible within the system (snarky answers, etc).


We tend to close topics like that. I’ve seen @sam use the timed close function (where a topic will be closed for {x} hours or days, then automatically reopen) when there’s a long, thoughtful writeup of the pros and cons but a lot of knee-jerk “this is still bad” replies.

I find that this tends to be topic-specific, so temporarily closing those particular problem topics tends to solve the problem. If it doesn’t solve the problem, because the person is doing this in many or all topics they participate in … that violates multiple guidelines I defined in What If We Could Weaponize Empathy? such as

  • Endless Contrarianism
  • Axe-Grinding
  • Griefing
  • Persistent Negativity
  • Ranting
  • Grudges

If one or more of these criteria are met, for a sustained period of time, that’s ample grounds for suspension.

Feel free to cite those rules in public when suspending users. We have suspension reasons that can be applied to accounts (and are in fact required for a suspension). Suspensions are always timed in Discourse, so you can also leave the door open to a person reforming later… but the emotional labor of reforming is on them, not you.

The goal of Discourse is to amortize effort across the community whenever possible … not to concentrate the moderation (and emotional labor) load on staff.

In general when you talk about Intentional Communication, to me that means specialized software. For example, Stack Overflow has very strict (and necessarily so) norms, as it is learning focused Q&A, where the goal is to create a useful shared artifact for future programmers, not “answer my computer programming tech support question right now.”

I can tell you that for many people they do feel that the style of communication on Stack Overflow is a burden. But what they can’t deny is that structured communication results in very effective search artifacts for future visitors… which of course, is the whole point of those interactions!

(Heh, I just noticed @notriddle posted basically the same thing while I was composing this reply. Do note that the “complaints” in that article are largely misunderstandings of what Stack Overflow is designed to do, and who it’s designed for. Imagine the sheer utter inhumanity of a world where Wikipedia articles don’t start with Hello and end with Thank You Very Much!)

Since Discourse isn’t a specialized communication tool, but a tool for general purpose communication in a variety of disparate communities, there’s a limit to what can be done here.


Of course, you spun it positively, I spun it negatively, but as far as actual facts go, we completely agree: the intentional communication model cannot be the primary means of community-forming. There has to be an unstructured communication channel, warts and all, and if you don’t provide one, people will make one, even if it involves misusing a specialized piece of software.

(which is why Wikipedia not only has areas for general discussion, but features the Talk links at the top of every article, near to the edit button, where would-be editors can’t miss it)

And, my final edit, in which withoutboats from the Rust core team calls for less talk of concrete feature suggestions.


Thanks for the advice on asymmetric time costs! I’ll have our moderators read that.

In my talk, I was trying to say that the “self-expression <=> self-expression” flow is the general purpose flow. That is what all forums use now basically. On top of that, different communities could create more structured flows for their particular needs. So it seems like people are objecting to “all structured flows with no release valves” but that is a rather extreme interpretation of the core idea.

Thanks again for the notes on what y’all moderate. I appreciate it!


One year later, how are you feeling about these trends and themes in your community @evancz?

(Also this whole topic is pure gold :trophy: for anyone interested in community thinking, be sure to read it closely and follow the links.)


Thanks for bringing this back up. Very helpful. I hadn’t seen it, and I’m glad to have read it.

Edit: And I say that as somebody that caused a timed topic close here. :wink:


I think the situation is the same.

In trying to answer this question, I was thinking about the book Confidence Men and Painted Women which I read a couple years ago. It presents an era when there were many placeless people. People were moving to new cities and new social circles, so the traditional ways to build trust in communities did not work anymore. Elaborate rituals developed to try to filter for trustworthiness (clothing, manners, mourning, etc.) but it was always possible to do the rituals disingenuously. So the rituals became more elaborate in hopes of creating a better filter, but those could also be performed disingenuously. Etc. I was just reading to learn more about the Civil War era, so I was really surprised how relevant it felt to discussions of authenticity, social media, influencers, etc.

Anyway, back then the new places were cities flooded with new residents, so there were physical limits on how design could help with these problems. Now the new places are sites that have been created and designed for specific purposes. Naively this should mean the problems are easier to address (“just change the design!”), but the organizations that control the design for most people benefit immensely from designs that produce conflict (i.e. “high levels of engagement”).

I’d like to explore the ideas from my talk in implementation myself, but knowing how many years Elm has taken so far (and the particular emotional experience that came with that) I have not been personally prepared to pursue this more on my own.


I still feel like “time invested” is impossible to do disingenuously, so that filter is still workable. A person who spends 14 days, 30 days, 60 days… a year… in a community has a verifiable public track record that can be thoroughly vetted.

The Discourse Trust Level System is an attempt to capture a rough form of this, for example once you get to TL3 (regular) we know you’ve at least…

  • visited 50% of the last 100 days
  • read 25% of the topics and posts created in the last 100 days
  • replied to at least 10 topics in the last 100 days

(among other things, see the above link for more specifics)
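Just to show how mechanical that baseline check is, here is a rough sketch of the three criteria listed above. The function name, parameter names, and the split of “topics and posts” into two percentages are my own invention, not Discourse’s actual promotion logic:

```python
# Hypothetical sketch of a TL3-style promotion check over a trailing
# 100-day window. Thresholds mirror the criteria listed above; all names
# are illustrative, not Discourse internals.

def meets_tl3_requirements(days_visited: int,
                           topics_read_pct: float,
                           posts_read_pct: float,
                           topics_replied_to: int) -> bool:
    """Rough check against the last-100-days criteria."""
    return (days_visited >= 50            # visited 50% of the last 100 days
            and topics_read_pct >= 0.25   # read 25% of new topics
            and posts_read_pct >= 0.25    # read 25% of new posts
            and topics_replied_to >= 10)  # replied to at least 10 topics
```

The point of the sketch is that “time invested” reduces to a few verifiable counters over a trailing window, which is exactly why it is hard to fake quickly.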

That does not speak to post quality, of course, or whether the poster has overall positive or negative contributions via their posts, either. That takes human (and community) judgment.


Yes, I suspect that all the others were simply an attempt to figure it out before enough time had elapsed to have that known-entity community judgment.


I had an interesting experience reading this. I had some downtime on a Sunday and I was aimlessly scrolling through Twitter. This is not a common pastime, but I like to see what it’s like every now and then. In my timeline I came across a tweet by Jeff from a few days ago which caused a bit of a stir.

(@codinghorror apologies for dredging this up, but it provides context for my experience)

I have my own views on the subject that have evolved over time. I had a similar view to the one Jeff expressed at one point. After discussing the topic with friends and others who worked in tech (and some in tech unions) my view has shifted. Some of the things I learnt along the way are expressed in that Twitter thread.

I found myself reading the entire thread. While the thread contains a fair number of useful references and points, these are outweighed by sarcastic, personal or highly charged one-liners. While I don’t hold the initial view Jeff expressed, I found myself briefly (for 3 seconds) tempted to tweet, something I hardly ever do, to point out that sarcasm was not the best way to inform someone (or something like that), and then thought “What the hell am I thinking?”. Positional discussion (aka ‘argument’) on Twitter is largely pointless.

In fact, I thought, why did I just read that entire thread? I could have spent that time outside, with my girlfriend, or doing literally anything else. I already knew what was going to be said in it, but I just kept scrolling. Perhaps out of some perverse curiosity to see what other semi-witty (but mostly pretty banal) put-downs people would come up with next, but also because it was so easy to do so.

So, leaving Twitter, I came over to Meta, saw this topic, watched Evan’s great talk and read some of the linked discussions and the interesting thoughts expressed in this topic, including those by Jeff. As you can imagine, my mind was reeling at this point from the whiplash of being ensnared by the “viral” content and UX on Twitter and the “meta” discussion of that phenomenon going on here.

The thought this left me with is that, as Evan mentioned, the intention of online discussion is key. Jeff can speak for himself, but it seemed like the intention of the initial tweets was to solicit responses that challenged his view. Whether the various people who responded with one-line takedowns intellectually understood it that way or not, the unstated assumption of the takedowns was that he was stating a political position that reflected something about his identity.

Now, both can be (and typically are) true at once. You can raise something to have a discussion about it, and also be stating a ‘political’ view that reflects your identity. However, what can often happen in an online context, particularly with the constraints of a platform like Twitter, is that the intention behind the statement is overwhelmed by the political or identity aspects of it.

And perhaps in some online contexts, like Twitter, this is to be expected. I think we often mistake (or hope for) Twitter to be an ‘open discussion platform’ when really it’s an ‘open identity platform’ with side businesses of sharing information and humour. It’s largely a way for people to signal their identity and find like minds. Which has a role in public life, but can be easily misunderstood.

This misunderstanding, about the character of online discussion platforms and posts within those platforms, can be addressed in a number of ways ranging from the community design (as @erlend_sh mentioned), to the structure of posts and posting to include some notion of intention (as Evan mentions).

That said, I wonder whether a “learning intent” badge on Jeff’s tweet would have changed the reaction to it. On balance, I feel like the structural and “character” aspects of the forum of online discussion matter more than people’s appreciation of the specific intent of the poster. As I alluded, I think many of the folks responding with one-line put-downs to Jeff probably did understand, on one level, that he was putting a view out there to learn more about the subject at hand.

Whatever methods you employ, I think what I would like to see more of is a better understanding of the different roles and purposes of different platforms, both in specific cases and in the public debate about them. Twitter is not designed to produce civilised discourse, and Discourse is not designed to produce viral content. But we often view both (and other) platforms through the lens of whatever utopian vision we have for society, politics or the community we’re running and try to mould the platform to that vision.


On Twitter I imagine a badge like that would immediately be subverted in three different directions simultaneously. Some would use it on every one of their posts as a way of virtue signaling. Others would use it sarcastically and as a joke. Others would use it cynically as a way to get more people to see their tweet.

The only way I see to handle this problem is moderation. Somebody has to be willing to judge whether the question is asked in good faith. Moderation is impossible on Twitter, but not in a Discourse instance. Hence, it would work on Discourse, but it would be less likely to be needed in the first place. Sort of an odd catch-22.


You have to take it for what it is – a passing comment that’s based on a pretty random thought. That’s … what Twitter is. And it’s plenty fair to criticize that viewpoint, or I wouldn’t have bothered posting it. I certainly don’t post things on Twitter so that everyone can nod their head along and say “yep Jeff you’re right again boss excellent work keep it up”.

As I told people, I just don’t think about unions… basically ever. Pretty much all I know about unions is based on the movie Norma Rae (which oddly enough a lot of millennials have never even seen, though it is quite famous and won Academy Awards).

I certainly read all the replies (well most of them) and there were some good points in there, stuff I hadn’t considered, even beyond “it doesn’t have to be about the money”. That is easily the most I have thought about unions in my entire life. So if we go back to the original goal…

That’s not really the goal, though, to completely change someone’s mind. In the ideal case:

  • your positions on issues become more nuanced over time as you gain a greater understanding of the complexity involved and realize how many tradeoffs and exceptions and different ways to view the issue there are.
  • you can effectively argue the “opposite” side of an issue, even if you don’t agree with it, to prove you understand it.

In the average political discussion, if you’ve added even one data point or narrative that gets people to think a little more about their position and appreciate the nuances of the situation better … that’s about the best you can ever do.

Changing people’s minds? :laughing: that takes decades. Often the whole world has to change around people before they’ll do that.

… I still don’t really agree that unions make much sense in tech as a whole, but I certainly appreciate the nuances a lot better. And I definitely appreciate that the whole union topic is highly charged and contentious, way more than I knew.

“Learning intent” doesn’t really make sense; you don’t know what you don’t know, pretty much by definition. But if you look at the platform as …

Twitter is a place to share off the cuff, ephemeral thoughts that may or may not make sense

… versus …

Twitter is for peering deeply into the soul of individual people through certified statements that go on their permanent record

… you’ll have a very different experience.


Yes, I agree that how you characterize the platform informs how you judge discussion on it.

That said, in the case of Twitter, I think your use of it is not typical. Most people don’t like to have their random thoughts and comments judged as a reflection of their identity or their moral personhood – assessments that were implicit in quite a few of the tweets in response to yours.

Your willingness to put up with those kinds of character assessments in order to use Twitter in the way you’ve described is commendable, but not widely shared.

Because most people are not comfortable with the kind of response you got, they’ll tend to only tweet things that do reflect their identity or how they’d like their moral personhood to be perceived, rather than their random thoughts or comments.

They’re not necessarily looking for total validation, but they’re definitely trying to avoid put-downs. I would say that behaviour is more typical, and that the platform’s character reflects that, i.e. as an identity signalling service.