Huge thanks to @nat for looking into this. For others' reference: ai_translation_backfill_hourly_rate is different from ai_summary_backfill_hourly_rate, and I was mixing them up.
@nat, it looks like the translations backfill is now enabled; I see this in the dashboard:
Translation Progress
Backfill settings are configured to translate all posts after June 25, 1998 — this is estimated to take 10 hours.
However, in the logs I see this error:
Message (9 copies reported)
Job exception: limit
Backtrace
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/localize_topics.rb:10:in `execute'
/var/www/discourse/app/jobs/base.rb:318:in `block (2 levels) in perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management.rb:17:in `with_connection'
/var/www/discourse/app/jobs/base.rb:305:in `block in perform'
/var/www/discourse/app/jobs/base.rb:301:in `each'
/var/www/discourse/app/jobs/base.rb:301:in `perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:220:in `execute_job'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:185:in `block (4 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:180:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/lib/sidekiq/discourse_event.rb:6:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/lib/sidekiq/pausable.rb:131:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job/interrupt_handler.rb:9:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/metrics/tracking.rb:26:in `track'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/metrics/tracking.rb:134:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:173:in `invoke'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:184:in `block (3 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:145:in `block (6 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_retry.rb:118:in `local'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:144:in `block (5 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/config.rb:39:in `block in <class:Config>'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:139:in `block (4 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:281:in `stats'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:134:in `block (3 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_logger.rb:15:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:133:in `block (2 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_retry.rb:85:in `global'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:132:in `block in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_logger.rb:40:in `prepare'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:131:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:183:in `block (2 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:182:in `handle_interrupt'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:182:in `block in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:181:in `handle_interrupt'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:181:in `process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:86:in `process_one'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:76:in `run'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/component.rb:10:in `watchdog'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/component.rb:19:in `block in safe_thread'
The translation job runs every 5 minutes, meaning it runs 12 times an hour. There is integer division going on here: the hourly rate of 10 divided by 12 runs truncates to a per-run limit of 0.
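A minimal sketch of the truncation, with illustrative names (hourly_rate, runs_per_hour, and the final guard are assumptions for illustration, not the plugin's actual code):

# Illustrative Ruby only -- not the actual plugin code
hourly_rate   = 10       # ai_translation_backfill_hourly_rate
runs_per_hour = 60 / 5   # the job fires every 5 minutes => 12 runs/hour

per_run_limit = hourly_rate / runs_per_hour  # Integer#/ truncates: 10 / 12 == 0
# A per-run limit of 0 is presumably what raises the "Job exception: limit" above.
# Rounding up instead would avoid the dead zone for rates below 12/hour:
per_run_limit = (hourly_rate.to_f / runs_per_hour).ceil  # 10 / 12.0 => 0.833 => 1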
Done, I'll keep an eye on it, assuming this integer rounding issue will be handled in a future update.
Does the same issue also impact the AI summary backfill maximum topics per hour setting?
EDIT: I can't use 20 per hour with the current model (alongside the other AI features); it overflows the model's rate limits. Can I use a number below 20, like 12?
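If the per-run limit really is the hourly rate divided by 12 with integer division, then rates 1 through 11 truncate to 0 per run (the backfill stalls), 12 through 23 truncate to 1 per run (an effective 12/hour), and 24 gives 2 per run. So 12 looks like the smallest value that still makes progress.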
Oddly, it still seems to be hitting the limits. Before starting the translations backfill, the Discourse AI search/helper bot worked fine (which was the primary use of AI). But now it's hitting the Gemini 2.5 Flash free-tier limits, which are 10 RPM, 250,000 TPM, and 250 RPD. From the logs it appears to be hitting the 250 requests-per-day limit for some reason when backfill is enabled.
Is there a way to debug this to see how many requests it's sending and how often?
Message
DiscourseAi::Completions::Endpoints::Gemini: status: 429 - body: {
  "error": {
    "code": 429,
    "message": "You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits. To monitor your current usage, head to: https://ai.dev/usage?tab=rate-limit. \n* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_free_tier_requests, limit: 250\nPlease retry in 45.454864372s.",
    "status": "RESOURCE_EXHAUSTED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.Help",
        "links": [
          {
            "description": "Learn more about Gemini API quotas",
            "url": "https://ai.google.dev/gemini-api/docs/rate-limits"
          }
        ]
      },
      {
        "@type": "type.googleapis.com/google.rpc.QuotaFailure",
        "violations": [
          {
            "quotaMetric": "generativelanguage.googleapis.com/generate_content_free_tier_requests",
            "quotaId": "GenerateRequestsPerDayPerProjectPerModel-FreeTier",
            "quotaDimensions": {
              "location": "global",
              "model": "gemini-2.5-flash"
            },
            "quotaValue": "250"
          }
        ]
      },
      {
        "@type": "type.googleapis.com/google.rpc.RetryInfo",
        "retryDelay": "45s"
      }
    ]
  }
}
Backtrace
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.4/lib/active_support/broadcast_logger.rb:218:in `block in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.4/lib/active_support/broadcast_logger.rb:217:in `map'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.4/lib/active_support/broadcast_logger.rb:217:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.4/lib/active_support/broadcast_logger.rb:129:in `error'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:189:in `block (2 levels) in perform_completion!'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http.rb:2428:in `block in transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http/response.rb:320:in `reading_body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http.rb:2425:in `transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http.rb:2379:in `request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/mini_profiler/profiling_methods.rb:51:in `step'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/patches/net_patches.rb:18:in `request_with_mini_profiler'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:187:in `block in perform_completion!'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http.rb:1626:in `start'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http.rb:1064:in `start'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:140:in `perform_completion!'
/var/www/discourse/plugins/discourse-ai/lib/completions/llm.rb:447:in `generate'
/var/www/discourse/plugins/discourse-ai/lib/personas/bot.rb:90:in `reply'
/var/www/discourse/plugins/discourse-ai/lib/translation/base_translator.rb:53:in `get_translation'
/var/www/discourse/plugins/discourse-ai/lib/translation/base_translator.rb:29:in `block in translate'
/var/www/discourse/plugins/discourse-ai/lib/translation/base_translator.rb:29:in `map'
/var/www/discourse/plugins/discourse-ai/lib/translation/base_translator.rb:29:in `translate'
/var/www/discourse/plugins/discourse-ai/lib/translation/topic_localizer.rb:12:in `localize'
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/localize_topics.rb:40:in `block (2 levels) in execute'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activerecord-8.0.4/lib/active_record/relation/delegation.rb:101:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activerecord-8.0.4/lib/active_record/relation/delegation.rb:101:in `each'
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/localize_topics.rb:38:in `block in execute'
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/localize_topics.rb:22:in `each'
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/localize_topics.rb:22:in `execute'
/var/www/discourse/app/jobs/base.rb:318:in `block (2 levels) in perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management.rb:17:in `with_connection'
/var/www/discourse/app/jobs/base.rb:305:in `block in perform'
/var/www/discourse/app/jobs/base.rb:301:in `each'
/var/www/discourse/app/jobs/base.rb:301:in `perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:220:in `execute_job'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:185:in `block (4 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:180:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/lib/sidekiq/discourse_event.rb:6:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/lib/sidekiq/pausable.rb:131:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job/interrupt_handler.rb:9:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/metrics/tracking.rb:26:in `track'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/metrics/tracking.rb:134:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:173:in `invoke'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:184:in `block (3 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:145:in `block (6 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_retry.rb:118:in `local'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:144:in `block (5 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/config.rb:39:in `block in <class:Config>'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:139:in `block (4 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:281:in `stats'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:134:in `block (3 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_logger.rb:15:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:133:in `block (2 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_retry.rb:85:in `global'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:132:in `block in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_logger.rb:40:in `prepare'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:131:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:183:in `block (2 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:182:in `handle_interrupt'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:182:in `block in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:181:in `handle_interrupt'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:181:in `process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:86:in `process_one'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:76:in `run'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/component.rb:10:in `watchdog'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/component.rb:19:in `block in safe_thread'
I don't recommend using the free tier for translations. Translating even a simple post can wipe out the entire free-tier quota.
You can use the following query to check which post is using up the tokens:
-- [params]
-- date :start_date = '2025-11-01'
SELECT
feature_name,
created_at,
post_id,
response_tokens
FROM
ai_api_audit_logs
WHERE
post_id IS NOT NULL
AND created_at > :start_date
AND (response_tokens IS NULL OR response_tokens != 1)
ORDER BY
created_at DESC
Since I still have access to your site, I created the query in your data explorer. It really was a single post that wiped out the quota; you can see it on your site at /admin/plugins/explorer/queries/7.
Thanks for creating the data query. I see it's using about 1,400 tokens per translation. Given that the Gemini free-tier limit is 250,000 tokens per minute with no daily token limit, I'm not sure tokens are the quota being exhausted.
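A quick sanity check on the arithmetic, assuming ~1,400 tokens per translation: 250,000 TPM ÷ 1,400 ≈ 178 translations' worth of tokens per minute, so the per-minute token quota can't be what's running out.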
Per the logs, it's hitting the 250 requests-per-day limit. Is there a way to check how many requests are used for each translation (or, for that matter, for any of the AI requests Discourse sends)?
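One rough way I can think of to approximate it, assuming each row in ai_api_audit_logs corresponds to a single API call, would be a variant of the query above:

-- [params]
-- date :start_date = '2025-11-01'
SELECT
  post_id,
  feature_name,
  COUNT(*) AS api_calls,
  SUM(response_tokens) AS total_response_tokens
FROM
  ai_api_audit_logs
WHERE
  post_id IS NOT NULL
  AND created_at > :start_date
GROUP BY
  post_id, feature_name
ORDER BY
  api_calls DESC

If a single post_id shows hundreds of rows, that would point at request count rather than tokens as the quota being consumed.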
No, we don't track requests. In fact, I don't recommend using a request-limited free tier for this feature at all. I would suggest paying for a key for translation, or turning the feature off.
For any AI features, what we care about is token usage, not the number of requests; we have a very good dashboard for that at /admin/plugins/discourse-ai/ai-usage. Our system doesn't bother with request counts.
Great pointer, thanks. I do see that this dashboard logs total requests in addition to tokens. It's awesome that the dashboard includes a filter to break this down by service (bot, summary, translations, etc.).
I asked about requests because even the paid tier has a limit of 10,000 requests per day. Given that a single post translation takes 270 requests, it'll run through the paid-tier limits before I know it, too.
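At 270 requests per post, a 10,000-requests-per-day cap works out to 10,000 ÷ 270 ≈ 37 posts per day.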
We successfully used the Google Gemini paid tier to translate well over a million posts here on Meta in the last few weeks, so I can attest it works just fine.
That's great. So why is our instance of Discourse using 270 requests to translate a single post? Is there something in the configuration that needs to be tweaked, or am I missing something?
You can check your /logs and cycle through the failures to see which jobs are failing. Also, keep in mind you're using the same key for all AI features: it's not only translation hitting that API; summaries and other features use it as well.
True, and a great point. I'm going to look into getting higher limits, possibly using different projects for each persona or model so that they don't eat into each other's API rate limits.
My question here: is there any advantage to having all personas or AI services in Discourse use the same API key? Is there some context or history that is lost if an AI service uses a different API key versus all of them sharing one? Or is each call completely transactional and independent of the others?