Discourse AI Failing to translate large number of posts and topics

My experience is quite the opposite. I have a set of instructions that require understanding the context, and non-thinking models either ignore them or apply them in the wrong situations. I've just translated a whole application this way (more than 3,000 strings), and reasoning models gave much better results.

Based on my findings, I reduced thinking effort to low and got all the translations to come through. But I believe limiting the output tokens like that is counterproductive: thinking models are not restricted from being used for translations, and users have no clue why translation is failing.

The solution could be as simple as further multiplying the output token limit by 2 when the LLM has thinking enabled, or exposing the multiplier as a config option.
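To make the suggestion concrete, here is a minimal sketch of what such a budget calculation could look like. This is not the actual Discourse AI code; the function name, the base multiplier of 2, and the thinking multiplier are all assumptions for illustration.

```python
def output_token_budget(input_tokens: int,
                        thinking_enabled: bool,
                        base_multiplier: float = 2.0,
                        thinking_multiplier: float = 2.0) -> int:
    """Hypothetical token cap: a base multiple of the input size,
    doubled again when the model spends tokens on reasoning,
    since reasoning tokens count against the output budget."""
    budget = input_tokens * base_multiplier
    if thinking_enabled:
        budget *= thinking_multiplier
    return int(budget)

# A non-thinking model gets roughly 2x the input budget;
# a thinking model gets roughly 4x, leaving room for reasoning.
print(output_token_budget(1000, thinking_enabled=False))  # 2000
print(output_token_budget(1000, thinking_enabled=True))   # 4000
```

Exposing `thinking_multiplier` as a site setting would let admins tune it per model instead of hard-coding the factor.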