Certain Discourse AI features require a third-party Large Language Model (LLM) provider. Please see each AI feature to determine which LLMs are compatible.
The following guide compares the estimated costs of different LLM providers.
Note that costs vary with factors such as the number of requests, the length of the text, the computational resources used, and the models chosen. For the most up-to-date and accurate pricing, please check with each provider. A short sketch after the list below shows how to turn per-token rates into a rough monthly estimate.
- OpenAI: OpenAI’s pricing varies based on the model and usage. For instance, GPT-4 costs $0.03 per 1K tokens (input) / $0.06 per 1K tokens (output). You can also start experimenting with $5 in free credit that can be used during your first 3 months.
- Anthropic: Claude-2’s pricing is $11.02 per 1M tokens (input) / $32.68 per 1M tokens (output). Please check their model pricing for additional details.
- Azure OpenAI: Azure OpenAI pricing is also dependent on the model and capabilities, e.g. GPT-3.5-Turbo pricing is $0.0015 per 1K tokens (input) / $0.002 per 1K tokens (output).
- AWS Bedrock with Anthropic access: On-demand pricing is $0.00163 for 1K input tokens and $0.00551 for 1K output tokens.
- HuggingFace Endpoints with a Llama2-like model: Hugging Face’s pricing varies based on usage and the type of subscription. For instance, the Pro subscription starts at $20 per user per month.
- Run your own OSS Llama2-like model with TGI: Costs depend on your infrastructure, any fine-tuning you perform, and the ongoing work of managing and maintaining the model.
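
As a rough illustration, the sketch below converts per-token rates like those quoted above into a monthly estimate. The rates are the example figures from this guide and may be outdated; the workload numbers (requests per day, tokens per request) are assumptions for illustration only, so substitute your own community's usage and re-check pricing with each provider.

```python
# Rough monthly cost estimate from per-token rates.
# Rates below are the example figures quoted in this guide; the workload
# (requests/day, tokens per request) is an illustrative assumption.

def monthly_cost(requests_per_day, input_tokens, output_tokens,
                 input_rate_per_1k, output_rate_per_1k, days=30):
    """Return the estimated USD cost per month for one model."""
    per_request = (input_tokens / 1000) * input_rate_per_1k \
                + (output_tokens / 1000) * output_rate_per_1k
    return per_request * requests_per_day * days

# Assumed workload: 200 requests/day, ~1,500 input and ~300 output tokens each.
workload = dict(requests_per_day=200, input_tokens=1500, output_tokens=300)

# GPT-4: $0.03 / 1K input, $0.06 / 1K output (as quoted above)
print(f"GPT-4:    ${monthly_cost(**workload, input_rate_per_1k=0.03, output_rate_per_1k=0.06):,.2f}/mo")

# Claude-2: $11.02 / 1M input, $32.68 / 1M output = $0.01102 / $0.03268 per 1K
print(f"Claude-2: ${monthly_cost(**workload, input_rate_per_1k=0.01102, output_rate_per_1k=0.03268):,.2f}/mo")

# GPT-3.5-Turbo (Azure figures above): $0.0015 / 1K input, $0.002 / 1K output
print(f"GPT-3.5:  ${monthly_cost(**workload, input_rate_per_1k=0.0015, output_rate_per_1k=0.002):,.2f}/mo")
```

With these assumed numbers, the same traffic works out to roughly $378/month on GPT-4, $158/month on Claude-2, and $17/month on GPT-3.5-Turbo, which is why the model choice dominates the overall cost.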