"Communication Style" as part of a user's profile

I’ve been pondering this for a while, especially since the AI sentiment integration, and after a discussion with @maiki. What I’m wondering is: would it be possible to have, next to a user’s username or on their profile, a short description of how the person tends to behave or communicate? It could be light-hearted and whimsical, for example:

“Forgive Dave, sometimes he speaks a bit more directly than others (but he always means well :slight_smile: )”

Or “Simon is extremely friendly, you may need to search through the friendliness to find his point”

Those are just some off-the-top-of-my-head examples. The main use case here is that we all know people who simply don’t communicate in the friendliest way at times, even though they mean well. Or perhaps someone who is simply a bit quirky in how they say things. If you are familiar with their style, then you can read it with understanding. However, if you are not familiar with that person and it is your first interaction with them, certain communication styles can come across wrong. Directness can be seen as rude, friendliness can be seen as false, and some quirks are just too strange for the reader to see through.

I wonder if something like this would be possible, and if possible would it work in practice?


We are currently experimenting with sentiment analysis on user profiles (only staff can see it). I think at some point we might open this up so that all users can view it.

I think that, depending on the models, this type of analysis could be possible in the future by looking at a user’s entire topic/post history, though it might be limited to a certain extent.

I do think this might produce false positives, though: how do you truly know whether someone means well unless you know them intimately, in person?

Any additional thoughts @sam @Falco?


I hope you’re at least having some internal discussions about why this might be a bad idea.


If this system is implemented, would different standards be applied to different users, based on varied outcomes of analyses of their communication styles, when Moderators decide whether or not to confirm a flag that has been raised against one of their posts?

Please don’t take this question the wrong way. Rather, consider that it is intended as a friendly one being asked with a direct communication style.


I do wonder, if allowing users to see their own sentiment analysis, might give challenging users food for thought. You know, like that trick of putting a mirror behind a customer service desk?

Perhaps this could come with links to the hows and whys of changing their communication patterns for particularly challenging users? For example, a user is showing an increase in aggressive sentiment, but they can see that in their own analysis, and be given links to some topics which outline alternative ways to communicate that typically get better outcomes as well.

JIT sentiment analysis in the Composer

Expanding on this slightly, I also mentioned to Saif the idea of having a JIT analysis in the composer. If a hostile tone is coming across, perhaps the composer gives the user a warning to cool off first and come back later?
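To make the idea concrete, here is a minimal sketch of what such a composer check might look like. Everything here is assumed for illustration: `score_toxicity`, `composer_check`, and the `0.8` threshold are hypothetical names and values, not real Discourse APIs, and a real deployment would call an actual classifier rather than this toy keyword heuristic.

```python
# Hypothetical sketch only: score_toxicity, composer_check, and the 0.8
# threshold are illustrative assumptions, not real Discourse APIs.
from typing import Optional

WARN_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this

def score_toxicity(draft: str) -> float:
    """Stand-in for a real classifier; here, a crude keyword heuristic."""
    hostile_words = {"idiot", "stupid", "shut up"}
    hits = sum(1 for w in hostile_words if w in draft.lower())
    return min(1.0, hits / 2)

def composer_check(draft: str) -> Optional[str]:
    """Return a gentle warning if the draft reads as hostile, else None."""
    if score_toxicity(draft) >= WARN_THRESHOLD:
        return ("Your draft may come across as hostile. "
                "Consider cooling off and coming back later.")
    return None  # nothing to flag; let the user post as normal
```

The key design point is that the check is advisory: it returns a suggestion rather than blocking the post, so the user keeps the final say.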


Encouraging users to be introspective regarding their own behavior does seem a wise practice. The Discourse FAQ and Terms of Service are quite well designed to set a welcoming egalitarian tone within the numerous communities that use this excellent platform for discussion. It is uplifting to join a community when it presents from the start the impression that all members will be expected to conform to the same standards that entail being respectful to one another. It would be unfortunate if a system of sentiment analysis were instead to give the mistaken impression that it might offer some users, such as Dave, in the original post, license to depart from such standards, while the friendly ones, such as Simon, would be expected to continue to behave nicely.


Sentiment analysis being exposed publicly per user is not something I am personally comfortable with. I’m also not even sure I’m comfortable that staff can see this, or whether this should be a feature at all.

My main issue with “sentiment” is that it is inherently an opinion and a judgement based on an interpretation of words. I’m not convinced we should be giving that to AI.

I don’t think we have data that shows what sentiment is derived from, or how. To my knowledge it’s more of a black box, but I could be wrong.

Some issues I have with sentiment analysis:

  • If someone is a person who challenges ideas & offers rebuttals, is that a negative sentiment?

  • What about toxic positivity?

  • Also, is this forum-specific? Does it rate based on the general context of the forum as a whole? If there is a closed forum to discuss ideas that some view as “wrong”, does the AI sentiment decide which topics are right or wrong? Or is that derived within the community?

  • If AI commonly hallucinates, giving us small inaccuracies all of the time, how can we be sure it wouldn’t hallucinate in its sentiment analysis as well?


Interestingly, I can tell you that your post scored this on the Sentiment classification:


Ha! Point proven.

I am disagreeing and offering reasons why I have concern with this feature, and my post is viewed as “negative”.


I think I’m personally leaning towards “not public” for individual analysis, because labeling someone as “negative” (or whatever) can lead to unfair stereotypes. Sure, maybe someone is negative, or maybe English isn’t their first language… or maybe the AI is biased towards certain mainstream cultural norms.

Though, I do think it’s fair that if an admin uses a tool like this… that the individual can see what it says about them.

To make things fully cyclical, I asked ChatGPT about this discussion… and it offers a reasonable opinion:

It’s crucial to consider the subjective nature of AI sentiment analysis and its potential to misinterpret the context or cultural nuances. Also, we should be mindful of privacy concerns and the risk of labeling or stereotyping users based on AI-generated profiles.

Perhaps a middle ground could be explored. Instead of public labels, we could consider optional tools for personal reflection, allowing users to privately view and contemplate their own communication patterns. This could foster self-awareness without publicly categorizing or judging.

Instead of labeling things “positive”, “negative”, etc., this could be posed as “the phrasing you use may sometimes be viewed by others as negative or unhelpful”, and perhaps the user could even provide feedback on how accurate this seems from their perspective.


It’s helpful to highlight the actual problem that’s needing to be solved.

There’s a lot I could say about this, but I don’t think it would lead to a useful conversation on here. Maybe the most useful thing I can say is that as someone who (I hope) is generally a non-challenging user, I would not engage with any forum that was posting something like “communication style” or an AI generated sentiment score on my profile. So it might come down to the bottom line - for the possible benefit of improving the communication style of some challenging users, you’d risk losing contributions from some non-challenging users.


Perhaps I should clarify. This area is still experimental and we are trying to learn and improve what can and should be done. Specifically, opening this up is a long way off, if it ever becomes a reality.


This feels like the better solution to me. Not only does it encourage the user towards improving their posts, the feedback may also spill into improving how they communicate with others in their day to day lives. That would be a nice side effect :slight_smile:

Perhaps such feedback should only appear when a user’s topic is significantly more toxic than the rest of the community, or if their communication style matches one which is commonly misunderstood?

And your disagreement was presented with a very reasonable tone as well. Currently I see that as a refinement issue – a better AI would see that reply as: “generally opposing something, but with a neutral tone”.

In fact, GPT-4 does this:

The sentiment of the text is skeptical and concerned about the reliability and ethical implications of AI-driven sentiment analysis.

The tone of the text is cautious and questioning, reflecting unease and skepticism about AI sentiment analysis.

So it’s negative, as in not in favour, rather than negative as in rude/harmful.

Looking through the responses here, I feel the more valuable and acceptable solution would be a JIT intervention in writings that are overly toxic, or contain precursors to misunderstanding before they are posted. The user still has the right to post the original content, we just do our bit to ensure they are adequately informed about the negatives their writing may create in its current form, and present an alternative which may be more productive.


That seems a good, highly productive alternative :thumbsup: – very supportive of the users on this forum (and potentially on many others, if this feature is offered widely), and it avoids the risk of their thinking that some hidden overall assessment of their communication style is being used to manage them.

This is an interesting topic. I think it could be helpful to start by letting people describe their own way of communicating in their own words.

For me, I have difficulty communicating over text, except for fairly simple/short communications such as “the meeting is going to be at 10:30 in the morning” or something like that.

With in-person communication, I’ve heard that only about 7% of the meaning is carried by the actual words being spoken or written. While there are also limitations to in-person meetings and phone calls, the Discourse platform is mainly helpful for publishing more extended text that may take a while to review, and for letting many people communicate simultaneously without interrupting each other.

To avoid misunderstandings it can be helpful to have check-in call meetings at least once a week or month to review where people are at and try to make sense of what is being written.


So maybe a plugin like User Notes? Where each member can apply their own private note? Or something viewable by all?

A problem with making things public is that the “labelled” user is being profiled by an outside source. They might take offence, unless it is something they chose.

There is an unmaintained Discourse plugin called User Feedback, where users can rate and, if configured, leave a review. However, it has the same issue: it could cause calamities if users are offended by what is posted.

I had, shall I say, an immature user flip out over a :-1: reaction.

A fellow, long ago before Discourse added the mute-user option, created a Tampermonkey script that allowed a user to hide posts from selected people and, for fun, had labelling. It decorated admins, mods, and the topic OP, and you could label users. Each user could essentially have their own browser theme component. I could try to find the script, as I imagine it wouldn’t be too hard to convert to a theme component.

Maybe it could use a custom user field to add the user and label.

This is a great solution for a team or group that has such contact opportunities. How would you apply it at scale, and with multiple time zones?

For us there are going to be thousands of co-workers communicating across all time zones, amongst many teams. Unfortunately such calls are not possible. Even if they were, those calls would take important parts of the conversation out of the platform so others could not learn from them :frowning:

Very true. I see this as a reason why it can only be done by a party that cannot benefit from mis-characterising the user – such as an AI.

The more I think about it, the more I feel the original proposal of a communication style is outmatched by a JIT communication coach in the composer. More than anything, I love the thought of such communication coaching bleeding into the outside world. The “communication style” label simply wouldn’t achieve that, and as you say, it would make people understandably irate should they feel mis-characterised in any way (and the chances are they will).


Meetings are limited in scale and not recommended for different time zones.

One option is to have question-and-answer calls where people can write in questions ahead of time; the call is then recorded so they can listen to it at a later time.

This is actually a great opportunity to bring up a GDPR principle applicable here, Article 22, about automated decision making.

Having a public negative label applied to your posts or profile is something that a user could reasonably object to and demand a human evaluation.

So the software definitely needs to allow administrator override of what it decides to display publicly.

Labels like those theorized in the OP I would feel comfortable manually applying as an administrator, but the labels I’ve seen from current sentiment analysis technology (“this person tends to be negative!”) absolutely not. There are also some major messaging issues around what the technology thinks “negative” means versus what people think when they see “negative” used as a judgement. This is not helped by the fact that “explain what the model is doing” is still an open research area.


Note, given:

  1. The feature was somewhat unfinished – it supported no drill-down, so the labels felt very random
  2. The feature leaked into the public by mistake due to an internal bug
  3. We are not even sure the feature provides value

We rolled back this change for now and are regrouping.

Overall I like sentiment analysis, but it’s a very tricky feature to get right. Some places where it could potentially help moderation:

  • Sentiment on the forum this week is highly negative – because a ton of people are complaining about topics X and Y.
  • User X who is usually highly positive is very very negative this month - maybe reach out?


I’m less interested in labeling, and a lot more interested in “anomaly” detection or spotting overall bad trends.
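For what it’s worth, the “user X who is usually highly positive is very negative this month” idea above could be sketched as a simple baseline-deviation check. This is only an illustration under assumptions: sentiment scores in a −1 to +1 range, the window sizes, and the 2-sigma cutoff are all invented here, not anything the current feature does.

```python
# Illustrative sketch of per-user "anomaly" detection: flag a user whose
# recent sentiment departs sharply from their own historical baseline.
# Scores in [-1, 1], window sizes, and the 2-sigma cutoff are assumptions.
from statistics import mean, stdev

def is_anomalous(history: list, recent: list, sigmas: float = 2.0) -> bool:
    """True if the recent average sentiment sits more than `sigmas`
    standard deviations away from the user's historical average."""
    if len(history) < 2 or not recent:
        return False  # not enough data to judge
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return mean(recent) != baseline
    return abs(mean(recent) - baseline) / spread > sigmas

# A usually-positive user (scores near +0.7) turning sharply negative:
history = [0.7, 0.8, 0.6, 0.75, 0.7, 0.65]
print(is_anomalous(history, [-0.4, -0.5]))   # True: worth reaching out
print(is_anomalous(history, [0.7, 0.72]))    # False: business as usual
```

Comparing each user against their own history, rather than against a global “negative” label, sidesteps some of the stereotyping concerns raised earlier: the signal is “this is unusual for this person”, not “this person is negative”.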