Local Ollama not working with the plugin

Inside the container I can reach the Ollama service just fine. But in the Discourse plugin, all I get is an "Internal Server Error".

curl http://172.17.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1:8b",
    "messages": [{"role": "user", "content": "Hallo"}]
  }'

{"id":"chatcmpl-145","object":"chat.completion","created":1760470827,"model":"llama3.1:8b","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":"Halló! (That's the Icelandic pronunciation, by the way) or more commonly: Hallo! How can I help you today?"},"finish_reason":"stop"}],"usage":{"prompt_tokens":11,"completion_tokens":29,"total_tokens":40}}

Name: Ollama Llama 3.1 8B
Model ID: llama3.1:8b
Provider: OpenAI
URL: http://172.17.0.1:11434/v1/chat/completions
Tokenizer: Llama3
Context Window: 32000
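Since the provider is set to OpenAI, the plugin calls this URL using the OpenAI chat-completions dialect and sends an Authorization header with the configured API key. A minimal sketch of roughly that request shape, assuming Ollama's OpenAI-compatible endpoint (which ignores the bearer token, so any placeholder works):

curl http://172.17.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer placeholder-key" \
  -d '{
    "model": "llama3.1:8b",
    "messages": [{"role": "user", "content": "Hallo"}],
    "stream": false
  }'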

1 like

Can you go to the /logs page on your instance and share the error details here, please?

The test above was done from inside the Discourse container.

If I test through the web interface, I get an "Internal Server Error".

Here is the log from that moment:
tail -f /var/discourse/shared/standalone/log/rails/production.log

Started GET "/chat/api/me/channels" for 10.233.21.85 at 2025-10-15 04:33:58 +0000
Processing by Chat::Api::CurrentUserChannelsController#index as JSON
Completed 200 OK in 131ms (Views: 0.1ms | ActiveRecord: 0.0ms (0 queries, 0 cached) | GC: 75.4ms)
Started GET "/admin/plugins/discourse-ai/ai-llms/test.json?ai_llm%5Bmax_prompt_tokens%5D=32000&ai_llm%5Bmax_output_tokens%5D=&ai_llm%5Bapi_key%5D=[FILTERED]&ai_llm%5Btokenizer%5D=DiscourseAi%3A%3ATokenizer%3A%3ALlama3Tokenizer&ai_llm%5Burl%5D=http%3A%2F%2F172.17.0.1%3A11434%2Fv1%2Fchat%2Fcompletions&ai_llm%5Bdisplay_name%5D=Ollama%20Llama%203.1%208B&ai_llm%5Bname%5D=llama3.1%3A8b&ai_llm%5Bprovider%5D=open_ai&ai_llm%5Benabled_chat_bot%5D=false&ai_llm%5Bvision_enabled%5D=false&ai_llm%5Binput_cost%5D=&ai_llm%5Boutput_cost%5D=&ai_llm%5Bcached_input_cost%5D=&ai_llm%5Bprovider_params%5D%5Borganization%5D=&ai_llm%5Bprovider_params%5D%5Bdisable_native_tools%5D=true&ai_llm%5Bprovider_params%5D%5Bdisable_temperature%5D=true&ai_llm%5Bprovider_params%5D%5Bdisable_top_p%5D=true&ai_llm%5Bprovider_params%5D%5Bdisable_streaming%5D=true&ai_llm%5Bprovider_params%5D%5Benable_responses_api%5D=true&ai_llm%5Bprovider_params%5D%5Breasoning_effort%5D=default" for 10.233.21.85 at 2025-10-15 04:34:01 +0000
Processing by DiscourseAi::Admin::AiLlmsController#test as JSON
Parameters: {"ai_llm"=>{"max_prompt_tokens"=>"32000", "max_output_tokens"=>"", "api_key"=>"[FILTERED]", "tokenizer"=>"DiscourseAi::Tokenizer::Llama3Tokenizer", "url"=>"http://172.17.0.1:11434/v1/chat/completions", "display_name"=>"Ollama Llama 3.1 8B", "name"=>"llama3.1:8b", "provider"=>"open_ai", "enabled_chat_bot"=>"false", "vision_enabled"=>"false", "input_cost"=>"", "output_cost"=>"", "cached_input_cost"=>"", "provider_params"=>{"organization"=>"", "disable_native_tools"=>"true", "disable_temperature"=>"true", "disable_top_p"=>"true", "disable_streaming"=>"true", "enable_responses_api"=>"true", "reasoning_effort"=>"default"}}}
Completed 500 Internal Server Error in 45ms (ActiveRecord: 0.0ms (0 queries, 0 cached) | GC: 39.0ms)
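Note that production.log only records the 500 without a backtrace; the full exception usually shows up in Logster at /logs, or in the unicorn stderr log. A hedged way to check the latter, assuming the standard standalone layout:

tail -n 100 /var/discourse/shared/standalone/log/rails/unicorn.stderr.log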

app.yml

templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
  - "templates/web.ssl.custom.template.yml"

expose:
  - "127.0.0.1:8080:80"
  - "0.0.0.0:8443:443"

params:
  db_default_text_search_config: "pg_catalog.english"
  db_shared_buffers: "4096MB"

env:
  DISCOURSE_ALLOWED_INTERNAL_HOSTS: "localhost,127.0.0.1,172.17.0.1"
  http_proxy: "http://proxy.de:8080"
  https_proxy: "http://proxy.de:8080"
  no_proxy: "localhost,127.0.0.1,172.17.0.0/16,.firma.de"
  ENABLE_SSL: true
  DISCOURSE_BASE_URL: "https://forum.firma.de:8443"
  DISCOURSE_HOSTNAME: forum.firma.de
  DISCOURSE_PORT: 8443
  DISCOURSE_CDN_URL: "https://forum.firma.de:8443"
  DISCOURSE_FORCE_HTTPS: true
  LC_ALL: en_US.UTF-8
  LANG: en_US.UTF-8
  LANGUAGE: en_US.UTF-8
  UNICORN_WORKERS: 8
  DISCOURSE_DEVELOPER_EMAILS: 'm.k@firma.de'
  DISCOURSE_SMTP_ADDRESS: 10.176.97.14
  DISCOURSE_SMTP_PORT: 25
  DISCOURSE_SMTP_USER_NAME: ""
  DISCOURSE_SMTP_PASSWORD: ""
  DISCOURSE_SMTP_ENABLE_START_TLS: false
  DISCOURSE_SMTP_DOMAIN: forum.firma.de
  DISCOURSE_NOTIFICATION_EMAIL: noreply@forum.firma.de
  DISCOURSE_SMTP_OPENSSL_VERIFY_MODE: none

volumes:
  - volume:
      host: /var/discourse/shared/standalone
      guest: /shared
  - volume:
      host: /var/discourse/shared/standalone/log/var-log
      guest: /var/log

hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git

run:
  - exec: echo "Beginning of custom commands"
  - exec: |
      cd /var/www/discourse
      su discourse -c 'bundle exec rails runner "
        SiteSetting.force_https = true
        SiteSetting.port = 8443
        Rails.cache.clear
      "'
  - exec: echo "End of custom commands"

Ah, got it! The environment variable requires a special syntax for the list: pipe characters instead of commas.

DISCOURSE_ALLOWED_INTERNAL_HOSTS: "localhost|127.0.0.1|172.17.0.1"
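Rebuilding the container applies the changed app.yml; a quick sketch, assuming the default launcher setup with the container named app (reading the value via GlobalSetting is my assumption about how the env var is surfaced):

cd /var/discourse
./launcher rebuild app

# then, from inside the container (./launcher enter app),
# confirm the value was picked up:
rails runner 'puts GlobalSetting.allowed_internal_hosts'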

1 like

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.