Configuring LLM usage limits in Discourse AI

It seems we can’t completely block a group from using a specific model by setting its quota to 0.

Could you add support for this, so that a quota of 0 denies the group access to that model entirely?
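
To illustrate the behavior I have in mind, here is a rough sketch (hypothetical names, not the actual Discourse AI implementation), where an unset quota means "no limit" but an explicit 0 means "blocked":

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LlmQuota:
    # None = no quota configured (unlimited); 0 = explicitly blocked (the requested behavior)
    max_tokens_per_day: Optional[int]

def can_use_model(quota: Optional[LlmQuota], tokens_used_today: int) -> bool:
    """Hypothetical check: a quota of 0 should deny access to the model entirely."""
    if quota is None or quota.max_tokens_per_day is None:
        return True  # no limit configured for this group/model
    if quota.max_tokens_per_day == 0:
        return False  # quota of 0 means the group may not use this model at all
    return tokens_used_today < quota.max_tokens_per_day

# A group with quota 0 would be blocked outright, even before using any tokens.
assert can_use_model(LlmQuota(max_tokens_per_day=0), tokens_used_today=0) is False
```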