To use certain Discourse AI features, you need access to a Large Language Model (LLM) provider. Please see each AI feature's documentation to determine which LLMs are compatible.
If cost is a significant concern, one way to manage it is to set usage limits and a monthly budget directly with the vendor. Another option is to only let select users and groups access the AI features.
There are several variable factors to consider when calculating the cost of using LLMs.
A simplified view would be…
It is important to understand what tokens are and how to count them.
- LLM model and pricing → Identifying the specific LLM model you plan to use and finding its latest pricing details for input and output tokens
- Input tokens → The average length of your input prompts in tokens
- Output tokens → The average length of the model's responses in tokens
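If you want to count tokens yourself, here is a minimal Python sketch using the `tiktoken` library with the `cl100k_base` encoding used by GPT-4-class OpenAI models. Other models and providers tokenize differently, so treat the counts as rough estimates.

```python
# A minimal sketch of counting tokens with tiktoken (pip install tiktoken).
# Assumes the cl100k_base encoding used by GPT-4-class OpenAI models;
# other providers/models may tokenize differently.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Summarize the latest replies in this topic for me."
tokens = encoding.encode(prompt)

print(f"{len(tokens)} tokens for a {len(prompt.split())}-word prompt")
```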
Now let's go through an example based on AI Bot usage right here on Meta.
A lot of simplifications were made during this calculation, such as token usage, the number of users using AI Bot, and the average number of requests. These numbers should only be taken as general guidelines, especially since we do a ton of experimentation with AI Bot.
- Using Data Explorer, we looked at the average request/response tokens and all the other data here
- On average, response tokens were 3x to 5x bigger than request tokens [1]
- Assume an average user request to be 85 tokens, equivalent to less than one paragraph [2]
- Assume an average response to be 85 x 4 = 340 tokens, roughly three paragraphs worth
- Using GPT-4 Turbo from OpenAI, the cost for input tokens would be $10 / 1M tokens = $0.00001 / token x 85 tokens = $0.00085 for input
- For output tokens it would be $30 / 1M tokens = $0.00003 / token x 340 tokens = $0.0102 for output
- Total cost per request is $0.00085 + $0.0102 = $0.01105
- During February 2024, around 600 users were using the AI Bot, making an average of 10 requests each for that month. Now assume these numbers hold for your community
- This would mean the February cost for AI Bot would be $0.01105 x 600 users x 10 requests ≈ $66
- Fast forwarding this to a year's cost of running AI Bot, this would be $66 x 12 = $792 for the year of running GPT-4 Turbo as your LLM of choice (see the sketch below for the same arithmetic in code)
Now with GPT-4o you can halve that final cost even further!
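To make the arithmetic above easy to rerun with your own numbers, here is a minimal Python sketch. The prices and usage figures are the assumptions from the example (GPT-4 Turbo at $10/$30 per 1M input/output tokens, and GPT-4o assumed at roughly half that, $5/$15 per 1M); swap in your provider's latest pricing and your own community's usage.

```python
# A rough cost estimator using the assumptions from the example above.
# Prices are USD per 1M tokens and will drift; check your provider's latest pricing.
PRICING = {
    "gpt-4-turbo": {"input": 10.00, "output": 30.00},
    "gpt-4o": {"input": 5.00, "output": 15.00},  # assumed ~half of GPT-4 Turbo
}

# Usage assumptions from the worked example
AVG_REQUEST_TOKENS = 85        # less than one paragraph per request
AVG_RESPONSE_TOKENS = 85 * 4   # responses ~3x-5x larger, so ~340 tokens
USERS = 600                    # users who used AI Bot during the month
REQUESTS_PER_USER = 10         # average requests per user per month


def cost_per_request(model: str) -> float:
    prices = PRICING[model]
    input_cost = AVG_REQUEST_TOKENS * prices["input"] / 1_000_000
    output_cost = AVG_RESPONSE_TOKENS * prices["output"] / 1_000_000
    return input_cost + output_cost


for model in PRICING:
    per_request = cost_per_request(model)
    monthly = per_request * USERS * REQUESTS_PER_USER
    print(f"{model}: ${per_request:.5f}/request, "
          f"~${monthly:,.0f}/month, ~${monthly * 12:,.0f}/year")
```

With the GPT-4 Turbo figures this reproduces the roughly $66 per month from the example; the yearly number differs from $792 by a few dollars only because of rounding.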
[1] An estimate based on the OpenAI community and our own response-to-request token ratio
[2] How many words are 85 tokens? While looking at average user request token usage, I found numbers ranging from as low as 20 to over 100. I wanted to capture that more requests were closer to 100; the assumption is that those requests are closer to fully formed sentences and reflect well-thought-out prompts with lots of questions asked of the bot