viswanatha
(P Viswanatha Reddy)
October 29, 2025, 12:03
1
I have set up the LLM, embedding, and persona features; however, my prompts are not producing the expected responses from the existing questions and answers or the wiki resources available within our community.
Self-Hosted LLM: mistral:latest
Self-Hosted Embedding: all-minilm:latest
Persona:
Could anyone help me further with what needs to be done?
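For anyone reproducing this setup, a quick way to confirm both models are actually being served is to ask Ollama directly; a sketch, assuming a default install listening on port 11434:

# List the models Ollama has pulled locally
ollama list

# Or query the HTTP API; mistral:latest and all-minilm:latest should both appear
curl http://localhost:11434/api/tags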
Falco
(Falco)
October 29, 2025, 7:29
2
What exact Mistral model is this?
Can you share the prompt?
You can add Search as a “Forced Tool” for the first interaction to help ground the model.
viswanatha
(P Viswanatha Reddy)
October 30, 2025, 6:34
3
@Falco, below are the details in reply to your questions:
What exact Mistral model is this?
LLM model: mistral:latest
Link: https://ollama.com/library/mistral:latest
The model is Mistral 7B v0.3, a 7-billion-parameter open-source model released by Mistral AI.
Can you share the prompt?
System prompt:
You are a community knowledge assistant for this forum, {site_title} (site URL: {site_url}), whose users are engineers.
Always search and reference relevant forum posts, wiki articles, and tagged discussions before generating an answer.
Your first priority is to use retrieved forum content (via embeddings search) to craft responses.
Prefer summaries and citations from existing posts.
If multiple related topics are found, combine them clearly.
Only if no relevant content exists, respond using your general knowledge through the LLM.
Include topic titles or URLs when referencing posts.
Never hallucinate or invent answers not supported by forum data.
Be factual, concise, and professional.
When users ask broad questions, prefer summarizing multiple sources rather than guessing.
Always prefer context from categories, tags, and wikis indexed in embeddings.
I have also updated the Forced Tool as suggested.
Falco
(Falco)
October 30, 2025, 1:06
4
I’m afraid a rehashed 2023 model won’t make the cut here. Also, per Ollama’s own documentation for this model, it only supports tool calling via the raw API, which is not what we use.
Overall, this is a poor choice for AI Bot today.
Instead, use one of the following
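As an aside, the tool-calling limitation is easy to check for yourself by sending a tools array to Ollama's OpenAI-compatible endpoint; this is only a sketch, with a hypothetical search tool definition, assuming Ollama on its default port:

# Ask the model to call a hypothetical "search" tool via the OpenAI-compatible API.
# If tool calling is unsupported on this path, the request errors out or the model
# replies in plain text instead of returning a tool_calls entry.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral:latest",
    "messages": [{"role": "user", "content": "Search the forum for onboarding posts"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "search",
        "description": "Search forum posts",
        "parameters": {
          "type": "object",
          "properties": {"query": {"type": "string"}},
          "required": ["query"]
        }
      }
    }]
  }'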
viswanatha
(P Viswanatha Reddy)
October 31, 2025, 5:15
5
Hi @Falco, thank you for the inputs. We will check and update you further.
1 Like
viswanatha
(P Viswanatha Reddy)
November 4, 2025, 7:04
6
@Falco I am facing a 502 error while enabling the LLM. Please find the logs below.
Message
Unicorn worker received USR2 signal indicating it is about to timeout, dumping backtrace for main thread
config/unicorn.conf.rb:203:in `backtrace'
config/unicorn.conf.rb:203:in `block (2 levels) in reload'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:229:in `wait_readable'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:229:in `rbuf_fill'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:199:in `readuntil'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:209:in `readline'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:158:in `read_status_line'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:147:in `read_new'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2420:in `block in transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2411:in `catch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2411:in `transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2384:in `request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/mini_profiler/profiling_methods.rb:45:in `step'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/patches/net_patches.rb:18:in `request_with_mini_profiler'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:168:in `block in perform_completion!'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:1632:in `start'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:1070:in `start'
/var/www/di...
Backtrace
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:130:in `block in warn'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:231:in `block in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:231:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:231:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:130:in `warn'
/var/www/discourse/lib/signal_trap_logger.rb:40:in `public_send'
/var/www/discourse/lib/signal_trap_logger.rb:40:in `block (2 levels) in ensure_logging_thread_running'
<internal:kernel>:187:in `loop'
/var/www/discourse/lib/signal_trap_logger.rb:37:in `block in ensure_logging_thread_running'
Falco
(Falco)
November 4, 2025, 7:10
7
Set the provider to OpenAI if you are using Ollama’s OpenAI-compatible API.
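Ollama serves its OpenAI-compatible API under the /v1 path. As a quick sanity check before pointing the plugin at it (a sketch, assuming Ollama on its default port):

# A minimal chat completion against the OpenAI-compatible endpoint;
# a JSON reply here means the plugin, with provider set to OpenAI,
# should be able to reach the same URL
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral:latest", "messages": [{"role": "user", "content": "Hello"}]}'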
1 Like
viswanatha
(P Viswanatha Reddy)
November 4, 2025, 7:18
8
@Falco
I have tried with the OpenAI provider; the issue still persists.
Message
Unicorn worker received USR2 signal indicating it is about to timeout, dumping backtrace for main thread
(The rest of the message and backtrace are identical to the ones in my previous post.)
Falco
(Falco)
November 4, 2025, 7:30
9
Can your Discourse container access the service on port 11434? If it’s running on the host, you need to provide a way for network traffic to cross the container boundary.
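A quick way to check is to run the request from inside the container itself; a sketch, assuming the standard install where the container is named app and Ollama listens on the host:

# Enter the running Discourse container
cd /var/discourse
./launcher enter app

# Inside the container, localhost is the container itself, not the host,
# so try the Docker bridge gateway address instead
curl -m 5 http://172.17.0.1:11434/api/tags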
viswanatha
(P Viswanatha Reddy)
Novembre 4, 2025, 7:33
10
Yes, I was able to access one of the LLM models earlier on the same port from the Discourse container.
viswanatha
(P Viswanatha Reddy)
Novembre 4, 2025, 7:56
11
Could you please help with this? Also, may I know the minimum RAM requirement for the machine where this LLM is running?
Falco
(Falco)
November 4, 2025, 10:19
12
Ah, I got it! The environment variable requires a special syntax for the enumeration: pipes instead of commas.
DISCOURSE_ALLOWED_INTERNAL_HOSTS: "localhost|127.0.0.1|172.17.0.1"
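If it helps, that variable would normally go in the env section of containers/app.yml, followed by a rebuild so it takes effect; a minimal sketch, assuming the standard Docker install:

# containers/app.yml (excerpt)
env:
  DISCOURSE_ALLOWED_INTERNAL_HOSTS: "localhost|127.0.0.1|172.17.0.1"

# apply the change
cd /var/discourse
./launcher rebuild app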
As for the minimum RAM requirement: that depends on the model and context size.
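As a rough back-of-envelope estimate (my numbers, not a measured figure): at the 4-bit quantization Ollama ships by default, a 7B model needs about 7e9 parameters × ~0.5 bytes ≈ 3.5 GB for the weights alone, plus KV cache and runtime overhead that grow with the context size; the commonly cited minimum for 7B models is around 8 GB of RAM.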