Pricing for LLM providers used for Discourse AI

:information_source: In order to use certain Discourse AI features, users are required to use a 3rd party Large Language Model (LLM) provider. Please see each AI feature to determine which LLMs are compatible.

:warning: The following guide links to the pricing of different LLM providers.

Note that the costs might vary based on multiple factors such as the number of requests, the length of the text, the computational resources used, the models chosen, and so on. For the most up-to-date and accurate pricing, regularly check with each provider.


This is definitely not a statistically rigorous comparison, but based on my brief testing, OpenAI's GPT-4 was roughly three times more expensive than GPT-3.5 Turbo when counting API calls and tokens used; and because GPT-4's tokens also cost more per token, the real price difference is much bigger.

And I saw no benefit from GPT-4 compared to 3.5 Turbo.

And as a disclaimer: I used Finnish, so English may be a different story. Plus, any AI is totally useless in chat use with Finnish, but that is a totally different ball game; it means, from my point of view, that all chatbots are a pure waste of money when used with small languages.

The costs here are estimates, and agreed, the costs can vary quite dramatically based on usage!

It’s important to note that for many basic tasks, the difference between GPT-4 and GPT-3.5 models may not be significant. However, GPT-4 does have some substantial advantages in terms of its capabilities, creative understanding, and handling of raw input.

I also agree that for languages that are not popular, there is much to be desired in the model’s abilities.


I think we are talking about the same thing, but to be on the safe side :smirk:: that is an issue for the AI companies, and neither you, I, nor any dev can change that fact.

But what I’m after is that we should all keep an eye on how much money we are spending (unless we are spending from some budget other than our own pocket :smirk:) and try to find a balance between very subjective usefulness and cost.

And no, I don’t know what I’m talking about. Mainly because the responses of all chatbots are basically just based on the English buzz of a million flies (quantity over quality). The situation can change, for better or worse, if we get better tools to teach the AI which sources it can use. Sure, such tools exist, but they will cost far more than the price of tokens.

And yes, that is a headache for small players.

I’m wondering… is there a chance we could get a better cost/accuracy balance with freer prompt editing?

Would you be comfortable disclosing roughly what the cost is for Meta at the moment? Even as a ballpark or range would be helpful.

I asked the bot to give an estimate and it provided the following:

I feel like that number is too low, but discounting experimental work, usage from the Team, etc., perhaps it isn’t far from what most instances of a similar size to Meta could expect?


Another stupid question, but is the math itself valid? Just asking because LLMs simply can’t count.

My forum uses far fewer AI features (via OpenAI) and my fees are higher than that.


The token price that the bot mentioned isn’t accurate. The current pricing for gpt-3.5-turbo-0125 is $0.50 per 1 million input tokens and $1.50 per 1 million output tokens. Going with the assumption of half input and half output, 2.4 million tokens should only cost $2.40. gpt-4 is $30/m input and $60/m output, which would work out to $108 for 2.4m tokens.
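That arithmetic can be sketched as a small helper. The rates below are the ones quoted in this post (they go stale quickly, so treat them as placeholders and check the provider's pricing page):

```python
# Estimate API cost in USD from token counts, using per-million-token rates.
# Rates are the prices quoted in this post and will go out of date.
RATES = {
    # model: (input $/1M tokens, output $/1M tokens)
    "gpt-3.5-turbo-0125": (0.50, 1.50),
    "gpt-4": (30.00, 60.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for the given token usage."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# 2.4M total tokens, assuming half input and half output:
print(estimate_cost("gpt-3.5-turbo-0125", 1_200_000, 1_200_000))  # 2.4
print(estimate_cost("gpt-4", 1_200_000, 1_200_000))               # 108.0
```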


Claude 3 Haiku gets very close to GPT-4 performance at half the price of GPT-3.5.

I think you need a super compelling reason to use 3.5 over Claude 3 Haiku.

@Saif can you update the OP with the latest pricing from Claude? The OP is way out of date.

I am not sure it is worth carrying actual prices, since they change so often.


Updated the OP to just have the links. I agree the prices are ever-changing, and it’s better to get the most up-to-date info from the source.
