Estimating costs of using LLMs for Discourse AI

:information_source: In order to use certain Discourse AI features, a third-party Large Language Model (LLM) provider is required. Please see each AI feature to determine which LLMs are compatible.

:dollar: If cost is a significant concern, one way to manage it is to set usage limits and a monthly budget directly with the vendor. Another option is to restrict AI features to select users and groups.

There are several variable factors to consider when calculating the costs of using LLMs.

A simplified view would be…

:information_source: It is important to understand what tokens are and how to count them

  • LLM model and pricing → Identify the specific LLM you plan to use and find its latest pricing for input and output tokens
  • Input tokens → The average length of your prompts, in tokens
  • Output tokens → The average length of the model’s responses, in tokens
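The factors above combine into a simple per-request formula: input tokens times the input price plus output tokens times the output price. A minimal sketch in Python (the token counts and prices in the example call are placeholders, not real quotes):

```python
def cost_per_request(input_tokens, output_tokens,
                     input_price_per_1m, output_price_per_1m):
    """Estimate the USD cost of a single LLM request.

    Prices are expressed per 1M tokens, as most vendors quote them.
    """
    return (input_tokens * input_price_per_1m / 1_000_000
            + output_tokens * output_price_per_1m / 1_000_000)

# Hypothetical example: 100-token prompt, 400-token response,
# at $10 / $30 per 1M tokens.
print(f"${cost_per_request(100, 400, 10.0, 30.0):.5f}")
```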

Now let’s walk through an example based on AI Bot usage right here on Meta.

:warning: Many simplifications were made in this calculation, such as the token usage, the number of users using AI Bot, and the average number of requests. These numbers should only be taken as general guidelines, especially since we experiment heavily with AI Bot.

  1. Use Data Explorer to find the average request/response token counts and all the other data here

  2. On average, response tokens were 3x to 5x larger than request tokens [1]

  3. Assume an average user request to be 85 tokens, equivalent to slightly less than one paragraph [2]

  4. Assume an average response to be 85 x 4 = 340 tokens, about 3 paragraphs’ worth

  5. Using GPT-4 Turbo from OpenAI, input tokens cost $10 / 1M tokens = $0.00001 per token, so 85 tokens x $0.00001 = $0.00085 for input

  6. Output tokens cost $30 / 1M tokens = $0.00003 per token, so 340 tokens x $0.00003 = $0.0102 for output

  7. Total cost per request is $0.00085 + $0.0102 = $0.01105

  8. During February 2024, around 600 users were using the AI Bot, each making an average of 10 requests for that month. Now assume these numbers hold for your community as well

  9. This would mean February’s cost for AI Bot would be $0.01105 x 600 users x 10 requests ≈ $66

  10. Extrapolating this to a full year of running AI Bot, that is $66 x 12 = $792 per year with GPT-4 Turbo as your LLM of choice
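The ten steps above can be reproduced in a few lines. A sketch using the same assumptions (85 input tokens, a 4x output ratio, GPT-4 Turbo’s $10/$30 per 1M token pricing, and 600 users making 10 requests each per month):

```python
# Assumptions taken from the worked example above.
INPUT_PRICE = 10.0 / 1_000_000   # GPT-4 Turbo: $10 per 1M input tokens
OUTPUT_PRICE = 30.0 / 1_000_000  # GPT-4 Turbo: $30 per 1M output tokens

avg_input_tokens = 85                      # ~1 short paragraph per request
avg_output_tokens = avg_input_tokens * 4   # responses run ~3-5x longer

cost_per_request = (avg_input_tokens * INPUT_PRICE
                    + avg_output_tokens * OUTPUT_PRICE)  # $0.01105

users, requests_per_user = 600, 10
monthly_cost = cost_per_request * users * requests_per_user  # $66.30
yearly_cost = monthly_cost * 12  # $795.60 ($792 if you round monthly to $66)

print(f"per request: ${cost_per_request:.5f}")
print(f"monthly:     ${monthly_cost:.2f}")
print(f"yearly:      ${yearly_cost:.2f}")
```

Note that multiplying the unrounded monthly figure gives $795.60 rather than $792; the difference comes from rounding $66.30 down to $66 before annualizing.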

Now with GPT-4o you can roughly halve that final cost!
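To see the effect of switching models, rerun the same formula with GPT-4o’s launch pricing, assumed here to be $5 per 1M input tokens and $15 per 1M output tokens (check OpenAI’s pricing page for current rates):

```python
# Same usage assumptions as the worked example, with assumed GPT-4o pricing.
INPUT_PRICE = 5.0 / 1_000_000    # assumed: $5 per 1M input tokens
OUTPUT_PRICE = 15.0 / 1_000_000  # assumed: $15 per 1M output tokens

cost_per_request = 85 * INPUT_PRICE + 340 * OUTPUT_PRICE  # $0.005525
yearly_cost = cost_per_request * 600 * 10 * 12            # ~$397.80

print(f"yearly: ${yearly_cost:.2f}")  # half the GPT-4 Turbo figure
```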


  1. An estimate based on the OpenAI community and our own response-to-request token ratio ↩︎

  2. How many words are 85 tokens? While looking at the average user request token usage, I found numbers ranging from as low as 20 to over 100. I wanted to capture that more requests were closer to 100, on the assumption that those requests were closer to fully formed sentences and well-thought-out prompts with lots of questions asked of the bot ↩︎
