LLMs and the impact on customer support

As AI models advance, help and support tickets in forums will become AI-driven. Midjourney already does this for art, skipping the manual work. For-fun sections on forums won’t change much, since AI isn’t necessary if you’re just having fun. AI can also code, making components easier to build.

Help and support forums have been using automated processes for a while. Whether using a ‘real’ AI tool like ChatGPT will be an improvement remains to be seen; from what I’ve seen, ChatGPT tends to flub technical questions.


I did customer support for years, so I’ve given this a lot of thought. For many reasons, I think it’s important that high-quality customer support continues to allow customers to get in touch with a human without too much friction. There’s a good case to be made for letting customers use AI-powered search as a first attempt at answering their support questions, though.


One thing to be very mindful of is that these tools will transform support over the next few years.

It is very easy to say “yuck, not for me, I want 100% human.”

But there are two in-between states we will reach which are far more nuanced.

  1. AI-based triage: leaning on AI for easy first-line answers that customers can get right away without having to wait for support.

  2. AI-based tooling that makes support engineers more productive. Is a reply that is 21% authored by AI and supervised by a human a poisoned well and a violation of trust? I don’t think so… there is nuance here.
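To make the first in-between state concrete, here is a minimal sketch of AI-based triage, with a hypothetical knowledge base and function names invented for illustration: the system answers easy questions instantly and escalates everything else to a human.

```python
# Hypothetical first-line triage sketch: answer from a small knowledge base
# when a phrase matches, otherwise hand the ticket to a human engineer.

KNOWLEDGE_BASE = {
    "reset password": "Use the 'Forgot password' link on the login page.",
    "export data": "Admin -> Backups -> Download gives you a full export.",
}

def triage(ticket_text: str) -> dict:
    """Return an instant answer on a confident match, else escalate."""
    text = ticket_text.lower()
    for phrase, answer in KNOWLEDGE_BASE.items():
        if phrase in text:
            return {"handled_by": "ai", "answer": answer}
    # No confident match: a human support engineer takes over.
    return {"handled_by": "human", "answer": None}

print(triage("How do I reset password?")["handled_by"])        # ai
print(triage("Billing dispute on invoice #4521")["handled_by"])  # human
```

A real system would use embedding search or an LLM rather than substring matching, but the shape is the same: cheap instant answers first, human attention reserved for the rest.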


Given how often I know more about the product than the first-line support team does, AI can hardly make things worse, and with voice recognition/speech software it might even be easier to hold a conversation, although having a conversation with Alexa or Siri can be frustrating.

I had so many tickets elevated to level 2 or 3 support at one major software company that they wound up giving me a way to bypass the level 1 group.


I’m wary of “can hardly make things worse” types of arguments for anything that involves large scale social changes. There are few complex problems that can’t be made worse. Working gradually towards desired outcomes seems like a more reasonable approach.

My interest in this comes from having worked as a customer support representative. I feel a sense of kinship with the millions of people around the world who are doing similar jobs. Seeing “customer support representative” near the top of lists of jobs that are expected to be impacted by AI is concerning. Note that people from marginalized groups are heavily represented in the customer support workforce.

That sounds like a poorly structured support system. The level 1 group doesn’t necessarily need to be able to solve your problem. What they need to be able to do is understand and empathize with your problem and be motivated to see that it gets resolved.

Discourse is in a position to be a leader in terms of how AI gets integrated into customer support systems. I don’t have much to contribute to that, other than to try to highlight the role of human understanding, empathy, and motivation in customer support work. LLMs can mimic these attributes to some degree, but from my point of view, that is not an appropriate use of the technology.

From a customer’s point of view, to loosely quote Sam Altman (from memory), “no one likes being condescended to by a computer.”** Having LLM support agents attempt to mimic human concern is going to lead to customer frustration.

This is getting off topic from the question about how LLMs will impact forums. If there’s going to be a continued discussion about how Discourse powered customer support systems could be enhanced with AI, it should probably be moved to a separate topic.

** Edit: thinking about it some more, I believe the Sam Altman quote may have been closer to “no one likes being scolded by a machine.” Note that my making this correction is an example of how human motivation often serves a corrective function.


No one likes being asked ‘Are you sure it is plugged in’, either, but I’ve heard it many times.

I too have spent many hours at the help desk. But it was a matter of pride that we knew more than our users, and we got quizzed regularly by the help desk supervisor and others to make sure we were up to speed on things. But the first step was always figuring out a combination of what the user was trying to do and just how much the user knew about it.

The help desk staff wrote an Eliza program that mimicked the first few minutes of many support conversations.
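That kind of Eliza program is easy to picture: a handful of pattern/response rules covering the opening minutes of a support call. A tiny illustrative sketch (the patterns and replies here are invented, not from the original program):

```python
import re

# Eliza-style responder: match the user's line against canned patterns
# and return the stock help-desk reply for the first pattern that hits.
RULES = [
    (re.compile(r"won'?t (start|boot|turn on)", re.I),
     "Is it plugged in and switched on at the wall?"),
    (re.compile(r"error", re.I),
     "What is the exact text of the error message?"),
    (re.compile(r"slow", re.I),
     "Have you tried restarting the machine?"),
]

def respond(user_line: str) -> str:
    for pattern, reply in RULES:
        if pattern.search(user_line):
            return reply
    # Fallback keeps the conversation going, Eliza-style.
    return "Can you describe what you were trying to do?"

print(respond("My PC won't start"))
```

The original Eliza worked the same way: no understanding, just pattern matching and canned prompts, which is exactly why it could mimic the first few minutes of so many support conversations.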

I did not complain when that software company gave me a pipeline direct to level 3 support, and neither did the level 3 team. In fact, a couple of the developers (level 4) gave me their direct phone numbers, and would on occasion call ME!


If they don’t start with that, I usually fear they won’t come up with a solution at all.

What’s frustrating is when they are reading a script that they themselves don’t understand, or quickly suggest that the solution is to do a full reinstall of the OS on your phone.


When not tinkering with Discourse, I lead large teams in support functions. The reality is that these models will eventually be more reliable than humans: always on, available anytime from anywhere. There will always be demand for human interaction, I believe; however, I think the next several iterations of LLMs will make it really clear that AI can identify nuance and sentiment, and respond accordingly. Perhaps even more effectively than humans, as the AI response will be more predictable.

Of course, I am speculating, but I’m fairly confident the call center/full service strategy will be a much different beast in 5 years.


Does this mean that when AI models become more advanced (we already have GPT-4), they will be the main course for answers? This might mean less work for humans to find answers. Accuracy is important too, so double-check for errors.


100%. It’s not if, but when.


You could make a reasonable argument that search engines have become the main course for answers already. They pretty much put encyclopedias out of business.

A ChatGPT-enabled search engine (that might be redundant) may eventually become difficult to distinguish from a human chat partner.