Choice Architecture and the Ethics of Discourse

One of the things that most interested me in Discourse was the ‘framing’ of the project as set out by @codinghorror at the outset. He made both a functional AND an ethical case for improving online discussion by improving forum software. The default ‘Community Rules’ are a terrific exposition of the underlying values and intent of the project. Bravo.

I really like the direction Discourse has taken, and no criticism is intended here at all - but I am not sure there has been sufficient ongoing reflection on whether the design choices continue to reflect the underlying ethical intent of the project.

Over the last year I read ‘Hooked: How to Build Habit-Forming Products’ by Nir Eyal and, more recently, ‘Irresistible’ by Adam Alter, both of which are excellent and well worth a read. Most of my career has actually been in the field of addictions, so the whole arena of ‘habits’ is of considerable interest to me. The ethical implications of manipulating choice architecture to bring about behaviour change in others are complicated, and I do not think society has fully worked out where the line is crossed into the harmful exploitation of vulnerabilities (bugs, if you will) in the ways human brains function.

This has been percolating - and then I listened to a Sam Harris podcast with Tristan Harris: ‘What Is Technology Doing to Us?’. Tristan was a ‘Design Ethicist’ at Google. I won’t try to explain his ideas here, other than to say that I think he absolutely nails it - a genius on the topic if ever there was one.

My proposition is simply that what he has to say on this podcast is important - and important to Discourse. What I would like is a civilised (!) discussion here about the implications it has, or might have, for Discourse. Discourse is an important project, and if the discussion here is positive I propose that @codinghorror invite Tristan Harris to discuss and comment on the choice architecture within Discourse - my guess is that he would be up for it.

Critically, I ask that people ONLY participate in the discussion if they have listened to the podcast from beginning to end. It’s a big listen, I know, but it’s a big and important issue!

https://soundcloud.com/samharrisorg/71-what-is-technology-doing-to


Interesting podcast. I definitely think we need a vocabulary just to be able to discuss the ethical issues. At some point it also becomes a legal issue of what gets disclosed in a site’s privacy policy or ad policy.

How about a chart:

  • X axis is Disclosed to Not Disclosed
  • Y axis is Not Creepy to Creepy

The magic quadrant of Not Disclosed/Creepy is the biggest problem area. Of course, Discourse is open source, so the disclosure part is less problematic.

I think the Discourse team focuses more on good human factors engineering, and on not letting people do really stupid things, either unintentionally or on purpose, that cause harm. That’s good engineering and good ethics.


Interesting podcast, but… a bit academic, and perhaps somewhat over-generalized into pure philosophy rather than actionable advice. I suggest reading these blog posts if you want proof that I have been thinking about this stuff:

https://blog.codinghorror.com/designing-for-evil/
https://blog.codinghorror.com/your-community-door/
https://blog.codinghorror.com/they-have-to-be-monsters/

There are about 6-12 more I could link you to, but I think this is enough to start with. The narrative here is intentional:

  • there is evil
  • that evil is often other people
  • therefore build a door and know when and how to close it as needed

(For extra special bonus points, recognize that YOU are going to be the evil in some scenarios. Yes, you. Me. All of us.)

The above is, in a nutshell, why Discourse exists. Discourse is a spear aimed directly at the heart of Facebook. I don’t often talk about it that way in public, but that’s the truth and that is the long, long term view over several decades: Discourse is the anti-Facebook.

The general design principles reflected in Discourse are:

  • above all else encourage (and measure) reading and listening
  • provide excellent one click tools and guidance for dealing with trolls, spammers, and bad faith participants
  • nudge people to remember their better selves “just in time” at the exact time of temptation
  • prevent communities from degenerating over time; build sustainable communities that grow and survive for decades

https://blog.codinghorror.com/the-just-in-time-theory/
https://blog.codinghorror.com/training-your-users/
https://blog.codinghorror.com/can-software-make-you-less-racist/
https://blog.codinghorror.com/level-one-the-intro-stage/

Two concrete examples of these sorts of nudges shipped in 1.7 and 1.8: a) we added the “get a room” reminder when someone replies to the same person over and over in the same topic, and b) we send an encouraging email and award a special badge to two newly arrived users every month whose posts are well received by the existing community.
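Purely as an illustration of the mechanics, here is a minimal sketch of the first nudge. This is not the actual Discourse code (which is Ruby); the threshold and names below are my assumptions:

```python
# Illustrative sketch of a "get a room" style nudge; NOT the real
# Discourse implementation. Threshold and names are assumptions.
GET_A_ROOM_THRESHOLD = 3

def should_show_get_a_room(topic_posts, author, reply_target):
    """topic_posts: list of (poster, replied_to) pairs for one topic,
    oldest first. True once `author` has replied to the same person
    often enough that a just-in-time reminder seems warranted."""
    replies_to_target = sum(
        1 for poster, target in topic_posts
        if poster == author and target == reply_target
    )
    return replies_to_target >= GET_A_ROOM_THRESHOLD

# Example: Alice has replied to Bob three times in this topic, so the
# composer would show the reminder before she posts a fourth reply.
posts = [("alice", "bob"), ("bob", "alice"),
         ("alice", "bob"), ("bob", "alice"),
         ("alice", "bob")]
print(should_show_get_a_room(posts, "alice", "bob"))  # True
```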

As for notifications, the latest work from Dan Ariely suggests that businesses would be well served by batching notification emails into three periods in the day - morning, after lunch, and end of day - rather than letting notifications interrupt people all day long.
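A minimal sketch of what that batching could look like, assuming three fixed delivery windows (the times below are placeholders, not an actual Discourse setting):

```python
from datetime import datetime, time, timedelta

# Placeholder batch windows: morning, after lunch, end of day.
# The exact times are assumptions for illustration.
DELIVERY_WINDOWS = [time(9, 0), time(13, 30), time(17, 30)]

def next_delivery(now: datetime) -> datetime:
    """Hold a notification until the next batch window instead of
    sending it the moment it is generated."""
    for window in DELIVERY_WINDOWS:
        if now.time() <= window:
            return now.replace(hour=window.hour, minute=window.minute,
                               second=0, microsecond=0)
    # Past the last window: deliver in tomorrow's first batch.
    first = DELIVERY_WINDOWS[0]
    return (now + timedelta(days=1)).replace(
        hour=first.hour, minute=first.minute, second=0, microsecond=0)

# A notification generated at 10:17 waits for the after-lunch batch.
print(next_delivery(datetime(2017, 5, 1, 10, 17)))  # 2017-05-01 13:30:00
```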


The word “discourse” does naturally occur in the podcast towards the end, in the context of people in power lying.

The segment on VR was very cringeworthy. I skipped that.

Here is the essence of that podcast: [embedded excerpt missing]

Nice to see: guess what forum software the new Time Well Spent forum is running on…
