The usage problem after using AI translation

Hey there, have you followed these recommendations?

The usage graph definitely looks concerning. Can you try this Data Explorer query:

SELECT
  a.id,
  a.language_model,
  LENGTH(p.raw) AS raw_length,
  a.response_tokens,
  a.raw_request_payload,
  a.raw_response_payload,
  a.topic_id,
  a.post_id
FROM ai_api_audit_logs a
LEFT JOIN posts p ON p.id = a.post_id
LEFT JOIN topics t ON t.id = a.topic_id
WHERE a.created_at > CURRENT_DATE - INTERVAL '1 day'
  AND p.deleted_at IS NULL
  AND t.deleted_at IS NULL
  AND p.user_deleted = false
  AND a.feature_name = 'translation'
  AND LENGTH(p.raw) < 1000
  AND a.response_tokens > 10000
ORDER BY a.created_at DESC
LIMIT 100

The query shows the number of response tokens used relative to each post's raw length. Ideally the two should be similar, with response tokens no more than roughly 1.5x the raw length. The AiApiAuditLog entries (including the raw request and response payloads) will help determine what is going on.
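If you want to surface the outliers directly, here is a minimal sketch that applies that rough 1.5x rule of thumb. It reuses the same ai_api_audit_logs and posts columns as the query above; the 7-day window and the threshold are just assumptions you can adjust:

-- Flag translation calls whose response tokens exceed ~1.5x the post's raw character length.
-- Assumes the same ai_api_audit_logs / posts columns as the query above.
SELECT
  a.post_id,
  a.language_model,
  LENGTH(p.raw) AS raw_length,
  a.response_tokens,
  ROUND(a.response_tokens::numeric / NULLIF(LENGTH(p.raw), 0), 2) AS tokens_per_char
FROM ai_api_audit_logs a
JOIN posts p ON p.id = a.post_id
WHERE a.feature_name = 'translation'
  AND a.created_at > CURRENT_DATE - INTERVAL '7 days'
  AND a.response_tokens > 1.5 * LENGTH(p.raw)
ORDER BY tokens_per_char DESC
LIMIT 100

Posts near the top of that list are the ones worth opening the raw payloads for.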

Additionally, please share:

  • What model are you using? (The sketch after this list can help confirm which models actually show up in the audit logs.)
  • What's your backfill hourly rate? I suggest keeping it at a low value, like 50 for starters.
  • How many languages are you supporting? Does your selected model support them?
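If you are not sure which models the translation feature is hitting, a quick way to check is to group the audit logs by model. This is only a sketch using the same ai_api_audit_logs columns as above, with an assumed 7-day window:

-- Count translation calls and total response tokens per model over the last week.
SELECT
  a.language_model,
  COUNT(*) AS calls,
  SUM(a.response_tokens) AS total_response_tokens
FROM ai_api_audit_logs a
WHERE a.feature_name = 'translation'
  AND a.created_at > CURRENT_DATE - INTERVAL '7 days'
GROUP BY a.language_model
ORDER BY total_response_tokens DESC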