AI translation backfill not working after configuring all the settings

Many thanks to @nat for looking into this. For everyone's reference, ai_translation_backfill_hourly_rate is different from ai_summary_backfill_hourly_rate, and I was mixing the two up.

@nat, it looks like translation backfill is now enabled. I see this in the dashboard:

Translation progress

Backfill is configured to translate all posts after June 25, 1998; this is estimated to take 10 hours.

However, in the logs I see this error:

Message (9 copies reported)

Job exception: limit

Backtrace

/var/www/discourse/plugins/discourse-ai/app/jobs/regular/localize_topics.rb:10:in `execute'
/var/www/discourse/app/jobs/base.rb:318:in `block (2 levels) in perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management.rb:17:in `with_connection'
/var/www/discourse/app/jobs/base.rb:305:in `block in perform'
/var/www/discourse/app/jobs/base.rb:301:in `each'
/var/www/discourse/app/jobs/base.rb:301:in `perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:220:in `execute_job'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:185:in `block (4 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:180:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/lib/sidekiq/discourse_event.rb:6:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/lib/sidekiq/pausable.rb:131:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job/interrupt_handler.rb:9:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/metrics/tracking.rb:26:in `track'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/metrics/tracking.rb:134:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:173:in `invoke'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:184:in `block (3 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:145:in `block (6 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_retry.rb:118:in `local'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:144:in `block (5 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/config.rb:39:in `block in <class:Config>'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:139:in `block (4 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:281:in `stats'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:134:in `block (3 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_logger.rb:15:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:133:in `block (2 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_retry.rb:85:in `global'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:132:in `block in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_logger.rb:40:in `prepare'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:131:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:183:in `block (2 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:182:in `handle_interrupt'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:182:in `process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:86:in `process_one'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:76:in `run'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/component.rb:10:in `watchdog'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/component.rb:19:in `block in safe_thread'

Can you set the value to 20, please?

The job runs every 5 minutes, so the hourly backfill rate is spread across 12 runs per hour. There is integer division at play here, which makes the per-run rate 0 when 10 ÷ 12 is computed.
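
A quick way to see that truncation: the plugin does this division in Ruby, but PostgreSQL integer division truncates the same way, so you can check it straight from Data Explorer (the column names below are just illustrative labels):

-- Illustration only: 10 per hour spread over 12 five-minute runs rounds down to 0,
-- while 20 per hour rounds down to 1 per run.
SELECT 10 / 12 AS per_run_at_rate_10,
       20 / 12 AS per_run_at_rate_20
-- => per_run_at_rate_10 = 0, per_run_at_rate_20 = 1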


Done, and I'll keep it in mind, assuming this integer rounding issue will be fixed in the future.

Does the same issue also affect AI summary backfill maximum topics per hour?

20 per hour would mean roughly only 1 every 5 minutes. That definitely shouldn't hit the rate limits.

Oddly, it seems to be hitting the limits. Before starting the translation backfill, using the Discourse AI search/helper bot worked fine (that was the primary use of AI). But now it's hitting the Gemini Flash 2.5 limits, which are 10 RPM, 250,000 TPM, and 250 RPD. From the logs, it appears to be hitting the 250 requests-per-day limit for some reason when backfill is enabled.

Is there a way to debug this to see how many requests it's sending out, or how often it's sending them?

Message

DiscourseAi::Completions::Endpoints::Gemini: status: 429 - body: {
  "error": {
    "code": 429,
    "message": "You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits. To monitor your current usage, head to: https://ai.dev/usage?tab=rate-limit. \n* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_free_tier_requests, limit: 250\nPlease retry in 45.454864372s.",
    "status": "RESOURCE_EXHAUSTED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.Help",
        "links": [
          {
            "description": "Learn more about Gemini API quotas",
            "url": "https://ai.google.dev/gemini-api/docs/rate-limits"
          }
        ]
      },
      {
        "@type": "type.googleapis.com/google.rpc.QuotaFailure",
        "violations": [
          {
            "quotaMetric": "generativelanguage.googleapis.com/generate_content_free_tier_requests",
            "quotaId": "GenerateRequestsPerDayPerProjectPerModel-FreeTier",
            "quotaDimensions": {
              "location": "global",
              "model": "gemini-2.5-flash"
            },
            "quotaValue": "250"
          }
        ]
      },
      {
        "@type": "type.googleapis.com/google.rpc.RetryInfo",
        "retryDelay": "45s"
      }
    ]
  }
}


Backtrace

/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.4/lib/active_support/broadcast_logger.rb:218:in `block in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.4/lib/active_support/broadcast_logger.rb:217:in `map'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.4/lib/active_support/broadcast_logger.rb:217:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.4/lib/active_support/broadcast_logger.rb:129:in `error'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:189:in `block (2 levels) in perform_completion!'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http.rb:2428:in `block in transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http/response.rb:320:in `reading_body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http.rb:2425:in `transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http.rb:2379:in `request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/mini_profiler/profiling_methods.rb:51:in `step'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/patches/net_patches.rb:18:in `request_with_mini_profiler'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:187:in `block in perform_completion!'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http.rb:1626:in `start'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.7.0/lib/net/http.rb:1064:in `start'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:140:in `perform_completion!'
/var/www/discourse/plugins/discourse-ai/lib/completions/llm.rb:447:in `generate'
/var/www/discourse/plugins/discourse-ai/lib/personas/bot.rb:90:in `reply'
/var/www/discourse/plugins/discourse-ai/lib/translation/base_translator.rb:53:in `get_translation'
/var/www/discourse/plugins/discourse-ai/lib/translation/base_translator.rb:29:in `block in translate'
/var/www/discourse/plugins/discourse-ai/lib/translation/base_translator.rb:29:in `map'
/var/www/discourse/plugins/discourse-ai/lib/translation/base_translator.rb:29:in `translate'
/var/www/discourse/plugins/discourse-ai/lib/translation/topic_localizer.rb:12:in `localize'
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/localize_topics.rb:40:in `block (2 levels) in execute'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activerecord-8.0.4/lib/active_record/relation/delegation.rb:101:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activerecord-8.0.4/lib/active_record/relation/delegation.rb:101:in `each'
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/localize_topics.rb:38:in `block in execute'
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/localize_topics.rb:22:in `each'
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/localize_topics.rb:22:in `execute'
/var/www/discourse/app/jobs/base.rb:318:in `block (2 levels) in perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management.rb:17:in `with_connection'
/var/www/discourse/app/jobs/base.rb:305:in `block in perform'
/var/www/discourse/app/jobs/base.rb:301:in `each'
/var/www/discourse/app/jobs/base.rb:301:in `perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:220:in `execute_job'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:185:in `block (4 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:180:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/lib/sidekiq/discourse_event.rb:6:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/lib/sidekiq/pausable.rb:131:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job/interrupt_handler.rb:9:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:183:in `block in traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/metrics/tracking.rb:26:in `track'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/metrics/tracking.rb:134:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:182:in `traverse'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/middleware/chain.rb:173:in `invoke'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:184:in `block (3 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:145:in `block (6 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_retry.rb:118:in `local'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:144:in `block (5 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/config.rb:39:in `block in <class:Config>'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:139:in `block (4 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:281:in `stats'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:134:in `block (3 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_logger.rb:15:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:133:in `block (2 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_retry.rb:85:in `global'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:132:in `block in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/job_logger.rb:40:in `prepare'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:131:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:183:in `block (2 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:182:in `handle_interrupt'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:182:in `block in process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:181:in `handle_interrupt'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:181:in `process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:86:in `process_one'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/processor.rb:76:in `run'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/component.rb:10:in `watchdog'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sidekiq-7.3.9/lib/sidekiq/component.rb:19:in `block in safe_thread'

I don't recommend using the free tier for translations. Translating even a single post can exhaust the entire free-tier quota.

You can use the following query to check which post is consuming the quota:

-- [params]
-- date :start_date = '2025-11-01'

SELECT 
  feature_name,
  created_at,
  post_id,
  response_tokens
FROM 
  ai_api_audit_logs
WHERE 
  post_id IS NOT NULL
  AND created_at > :start_date
  AND (response_tokens IS NULL OR response_tokens != 1)
ORDER BY 
  created_at DESC

Since I still have access to your site, I created it in your Data Explorer. It really is a single post that exhausted it; you can see it on your site at /admin/plugins/explorer/queries/7.

Thanks for creating the data query. I see it's using roughly 1,400 tokens per translation. Since the Gemini tier limits are 250,000 tokens per minute with no daily token limit, I'm not sure that's the relevant one.

According to the logs, it's hitting the 250 requests-per-day limit. Is there a way to check how many requests are being used for each translation (or, for that matter, for any AI request Discourse sends)?
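
I'm assuming each row in ai_api_audit_logs corresponds to one outgoing API call (that's how the query above reads it); if so, would a rough Data Explorer query along these lines give a per-feature, per-day request count? This is my own guess, not a built-in report:

SELECT
  feature_name,
  date_trunc('day', created_at) AS request_day,
  COUNT(*) AS requests,
  SUM(COALESCE(response_tokens, 0)) AS response_tokens
FROM
  ai_api_audit_logs
GROUP BY
  feature_name, date_trunc('day', created_at)
ORDER BY
  request_day DESC, requests DESC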

No, we don't track requests. In fact, I don't recommend using a request-limited free plan for this feature at all. I'd suggest paying for a translation key or disabling the feature.

When using any AI feature, what matters to us is token usage, for which we have an excellent feature at <admin/plugins/discourse-ai/ai-usage>, not the number of requests. Our system doesn't concern itself with those.


Good point, thanks. I see this dashboard records the total number of requests in addition to tokens. It's great that the dashboard has added a filter to break this down by feature (bot, summarization, translations, etc.).

I guess my question is: if only one post was translated, why does it take more than 250 requests to do so?

I asked about requests because even a paid plan has a limit of 10,000 requests per day. Since a single post translation takes 270 requests, the paid plan limits will be exhausted before I know it.

We've successfully used the paid tier of Google Gemini to translate more than a million posts here on Meta over the past few weeks, so I can attest that it works perfectly.


That's great, so why is our Discourse instance using 270 requests to translate a single post? Is there something in the configuration that needs to be adjusted, or am I missing something?

You can check your /logs and browse the failures to see which jobs are failing. You're using the same key for all AI features, so it's not just translation hitting that API; summarization and other features use it too.


Fair point, and a good one. I'm going to look into getting higher limits, possibly by using different projects for each persona or model so that their API rate limits don't overlap.

My question here is: is there any advantage to having all the AI personas or services in Discourse use the same API key? Is any context or history lost if one AI service uses a different API key compared to all of them using the same one? Or is it all completely transactional and independent?

You can create multiple instances of the same LLM with different API keys.
