Feature request: Improve Discourse AI LLM setup (model discovery) and add AI configuration import/export

Hello Discourse team and community,

First, thank you for the ongoing work on Discourse AI. The feature set is impressive and clearly evolving quickly.

I’d like to propose two UX/admin improvements that would significantly reduce setup time and configuration errors for administrators—especially those managing multiple models/providers or multiple environments (staging/production).

1) LLM model auto-discovery for OpenAI-compatible endpoints

Today, when adding LLMs, administrators often need to manually paste a model ID for each entry. For OpenAI-compatible providers and gateways (e.g., self-hosted OpenAI-compatible endpoints, proxy/gateway layers), it would be extremely helpful if the UI could optionally fetch available models after entering the Base URL + API key.

Suggested UX

  • Admin enters Base URL + API key
  • Click “Fetch models”
  • Discourse calls /v1/models (or the provider's equivalent listing endpoint); a rough sketch follows after this list
  • UI presents a searchable dropdown/list
  • Admin selects one or more models to add/configure
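To make the intent concrete, here is a minimal sketch of what the "Fetch models" step could look like against an OpenAI-compatible /v1/models endpoint. It is illustrative only: Discourse itself is Ruby, and the function name, example base URL, and error handling below are my assumptions, not the plugin's actual code.

```python
# Hypothetical sketch: populate the model dropdown from an OpenAI-compatible endpoint.
import requests


def fetch_models(base_url: str, api_key: str) -> list[str] | None:
    """Return available model IDs, or None if the endpoint has no listing API."""
    try:
        resp = requests.get(
            f"{base_url.rstrip('/')}/v1/models",
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        resp.raise_for_status()
    except requests.RequestException:
        # Listing not supported or unreachable: fall back to manual model ID entry.
        return None
    # OpenAI-compatible responses look like {"object": "list", "data": [{"id": ...}, ...]}
    return sorted(item["id"] for item in resp.json().get("data", []))


models = fetch_models("https://llm.example.internal", "sk-example")
if models is None:
    print("No listing endpoint; keep the manual model ID field.")
else:
    print("Searchable dropdown options:", models)
```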

Notes

  • If the endpoint does not support listing, the UI can gracefully fall back to manual model ID entry.
  • A refresh button and short-lived caching would be useful, but not required for an initial implementation.

2) Import/Export for Discourse AI configuration

The Discourse AI configuration surface area is large (LLM connections, AI Bot, quotas, feature toggles, etc.). While the flexibility is great, it can be time-consuming and error-prone to set up by clicking through many screens—especially when trying to keep staging and production in sync.

I’d like to request an import/export capability for “Discourse AI configuration” as a whole.

Suggested behavior

  • Export all Discourse AI–related settings to a single file (preferably JSON or YAML for round-tripping)
  • Import the file to apply settings to another environment or restore a known-good configuration
  • Provide a preview/diff and validation (unknown keys, type checks) before applying; a rough sketch follows after this list
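To illustrate what I mean by a round-trippable format plus a pre-apply check, here is a rough sketch in Python. The setting names and the KNOWN_SETTINGS schema are invented for the example and are not Discourse's real site setting keys.

```python
# Hypothetical sketch of the export / dry-run-import round trip.
import json

KNOWN_SETTINGS = {            # setting name -> expected type (illustrative only)
    "ai_bot_enabled": bool,
    "ai_llm_quota_per_user": int,
    "ai_default_llm_model": str,
}


def export_settings(current: dict) -> str:
    """Serialize only the known AI settings to a stable, diff-friendly JSON blob."""
    return json.dumps({k: current[k] for k in KNOWN_SETTINGS if k in current},
                      indent=2, sort_keys=True)


def preview_import(payload: str, current: dict) -> list[str]:
    """Validate and diff an exported file against current settings without applying it."""
    incoming = json.loads(payload)
    report = []
    for key, value in incoming.items():
        if key not in KNOWN_SETTINGS:
            report.append(f"UNKNOWN KEY: {key}")
        elif not isinstance(value, KNOWN_SETTINGS[key]):
            report.append(f"TYPE ERROR: {key} expects {KNOWN_SETTINGS[key].__name__}")
        elif current.get(key) != value:
            report.append(f"CHANGE: {key}: {current.get(key)!r} -> {value!r}")
    return report


current = {"ai_bot_enabled": True, "ai_llm_quota_per_user": 50,
           "ai_default_llm_model": "gpt-4o-mini"}
payload = export_settings(current | {"ai_llm_quota_per_user": 100})
print("\n".join(preview_import(payload, current)) or "No changes.")
```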

Secrets handling

  • API keys could be excluded/masked by default, with an explicit opt-in to export them separately if needed (sketched below).
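A possible shape for that default, again as a non-authoritative sketch with invented key names:

```python
# Hypothetical sketch: redact API keys on export unless explicitly opted in.
import json

SECRET_KEYS = {"ai_openai_api_key", "ai_anthropic_api_key"}   # illustrative names


def export_with_secret_policy(settings: dict, include_secrets: bool = False) -> str:
    redacted = {
        k: (v if include_secrets or k not in SECRET_KEYS else "***REDACTED***")
        for k, v in settings.items()
    }
    return json.dumps(redacted, indent=2, sort_keys=True)


print(export_with_secret_policy({"ai_openai_api_key": "sk-live-abc",
                                 "ai_bot_enabled": True}))
```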

Optional (nice-to-have)

  • A spreadsheet-friendly export (CSV/XLSX) for bulk review/editing, while keeping JSON/YAML as the primary, lossless format.

Why this would help

  • Faster onboarding and reconfiguration for admins
  • Fewer mistakes from repetitive manual entry
  • Better parity between environments (staging/prod)
  • Easier auditing and change management via diffs/version control

Questions

  • Is there an existing recommended “bulk configuration” approach (Admin API / Rails console) that could be formalized into an import/export workflow?
  • Would model discovery be acceptable at least for OpenAI-compatible endpoints where /v1/models is available?

Thanks for considering this request. I’m happy to provide additional details, example workflows, or screenshots of the current setup steps if that would be helpful.

Best regards,
