Estimating costs of using LLMs for Discourse AI

:information_source: To use certain Discourse AI features, you need access to a Large Language Model (LLM) provider. Please see each AI feature to determine which LLMs are compatible.

:dollar: If cost is a significant concern, one way to control it is to set usage limits and a monthly budget directly with the vendor. Another option is to only let select users and groups access the AI features.

There are several variables to consider when calculating the cost of using LLMs.

A simplified view would be…

:information_source: It is important to understand what tokens are and how to count them

  • LLM model and pricing → The specific LLM model you plan to use and its latest pricing for input and output tokens
  • Input tokens → The average length of your input prompts in tokens
  • Output tokens → The average length of the model’s responses in tokens
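
In other words, a single request costs the input tokens times the input price plus the output tokens times the output price. Here is a minimal sketch of that formula (the function and parameter names are just illustrative):

```python
# Per-request LLM cost; providers quote prices in USD per 1M tokens.
def request_cost(input_tokens, output_tokens, input_price_per_m, output_price_per_m):
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000
```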

Now let’s walk through an example based on AI Bot usage right here on Meta.

:warning: This calculation makes a number of simplifications around token usage, the number of users using AI Bot, and the average number of requests. Treat these numbers only as general guidelines, especially since we do a lot of experimentation with AI Bot.

  1. We used Data Explorer to find the average request/response tokens and the other data used here

  2. On average, response tokens were 3x to 5x larger than request tokens [1]

  3. Assume an average user request is 85 tokens, equivalent to less than one paragraph [2]

  4. Assume an average response is 85 x 4 = 340 tokens, roughly three paragraphs’ worth

  5. Using GPT-4 Turbo from OpenAI at $10 / 1M input tokens ($0.00001 per token), the input cost is 85 tokens x $0.00001 = $0.00085 per request

  6. For output tokens at $30.00 / 1M tokens ($0.00003 per token), the output cost is 340 tokens x $0.00003 = $0.0102 per request

  7. Total cost per request is $0.00085 + $0.0102 = $0.01105

  8. During February 2024, around 600 users were using the AI Bot, making an average of 10 requests each that month. Now assume these numbers are representative of your community

  9. This would mean the February cost for AI Bot would be $0.01105 x 600 users x 10 requests ≈ $66

  10. Extrapolating to a full year of running AI Bot, that would be $66 x 12 = $792 per year with GPT-4 Turbo as your LLM of choice

Now with GPT-4o, you can roughly halve that final cost!
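
Putting the steps above into a short script makes it easy to re-run the numbers. This is only a sketch: the token counts and user numbers are the assumptions from this example, and the per-million-token prices are the OpenAI list prices at the time of writing, so substitute current pricing for your own estimate.

```python
# Rough AI Bot cost, using the assumptions from the example above.
# Prices are USD per 1M tokens and were the list prices when this was written.
INPUT_TOKENS = 85        # average request, < 1 paragraph
OUTPUT_TOKENS = 340      # average response, ~3 paragraphs
USERS = 600              # active AI Bot users in February 2024
REQUESTS_PER_USER = 10   # average requests per user that month

def request_cost(input_tokens, output_tokens, input_price_per_m, output_price_per_m):
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

for model, (in_price, out_price) in {"GPT-4 Turbo": (10.00, 30.00),
                                     "GPT-4o": (5.00, 15.00)}.items():
    per_request = request_cost(INPUT_TOKENS, OUTPUT_TOKENS, in_price, out_price)
    monthly = round(per_request * USERS * REQUESTS_PER_USER)  # e.g. $66 for GPT-4 Turbo
    print(f"{model}: ${per_request:.5f}/request, ${monthly}/month, ${monthly * 12}/year")
```

With those inputs, GPT-4 Turbo works out to about $66/month ($792/year) and GPT-4o to about $33/month ($396/year).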


  1. An estimate based on the OpenAI community and our own response-to-request token ratio ↩︎

  2. How many words are 85 tokens? While looking at average user request token usage, I found numbers ranging from as low as 20 to more than 100. I wanted to capture that more requests were closer to 100, and the assumption is that those requests are closer to fully formed sentences and well-thought-out prompts with lots of questions asked of the bot ↩︎



We recently shared the following with a customer who was asking about AI search use on Meta and how much it cost us.

Last month we did 1104 AI searches on Meta:

  • With GPT-4o Mini pricing, that would cost $0.25
  • With Claude 3 Haiku, it would be $0.53
  • With Gemini Flash, it would be $0.06

We have to pay attention to request tokens, which were around 85868, and response tokens from the LLM, which were around 408417.
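
The same arithmetic applies to those aggregate counts. Here is a sketch, with per-million-token prices that were roughly the public list prices at the time (the Gemini line assumes the Flash 8B rate); small differences from the figures above come down to rounding:

```python
# Monthly AI search cost from aggregate token counts.
# Prices are USD per 1M tokens (approximate list prices; plug in current ones).
PRICES = {
    "GPT-4o Mini":         (0.15,   0.60),
    "Claude 3 Haiku":      (0.25,   1.25),
    "Gemini 1.5 Flash 8B": (0.0375, 0.15),
}
REQUEST_TOKENS = 85_868
RESPONSE_TOKENS = 408_417

for model, (input_price, output_price) in PRICES.items():
    cost = (REQUEST_TOKENS * input_price + RESPONSE_TOKENS * output_price) / 1_000_000
    print(f"{model}: ${cost:.2f}")
```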


Estimated costs for a month of Image Captions on Meta:

  • 1019 calls
  • 55M request tokens
  • 34K response tokens

Which would cost, depending on the LLM:

  • Claude Haiku 3: $13.86
  • GPT-4o Mini: $8.31
  • Gemini 1.5 Flash 8B: $2.07
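
Dividing those monthly totals by the number of calls gives a rough per-image cost (a quick sketch using the figures above):

```python
# Approximate cost per captioned image, from the monthly totals above.
CALLS = 1019
MONTHLY = {"Claude Haiku 3": 13.86, "GPT-4o Mini": 8.31, "Gemini 1.5 Flash 8B": 2.07}
for model, total in MONTHLY.items():
    print(f"{model}: ${total / CALLS:.4f} per image")
```

So even the most expensive option here comes to under a cent and a half per captioned image.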