viswanatha
(P Viswanatha Reddy)
October 29, 2025, 12:03pm
1
I have set up the LLM, embedding, and persona features; however, the bot is not producing the expected responses from the questions and answers or the wiki resources available in our community.
Self-Hosted LLM: mistral:latest
Self-Hosted Embedding: all-minilm:latest
Persona:
Could anyone help me further with what needs to be done?
Falco
(Falco)
October 29, 2025, 7:29pm
2
What exact Mistral model is this?
Can you share the prompt?
viswanatha:
You can add Search as a “Forced Tool” for the first interaction to help ground the model.
viswanatha
(P Viswanatha Reddy)
October 30, 2025, 6:34am
3
@Falco, below are the details in reply to the above:
What exact Mistral model is this?
LLM Model : mistral:latest
Link:
The model at https://ollama.com/library/mistral:latest is:
Mistral 7B v0.3, a 7-billion-parameter open-source model released by Mistral AI.
Can you share the prompt?
System prompt:
You are a community knowledge assistant designed for this forum called {site_title} and with site URL {site_url}, having engineers as users.
Always search and reference relevant forum posts, wiki articles, and tagged discussions before generating an answer.
Your first priority is to use retrieved forum content (via embeddings search) to craft responses.
Prefer summaries and citations from existing posts.
If multiple related topics are found, combine them clearly.
Only if no relevant content exists, respond using your general knowledge through the LLM.
Include topic titles or URLs when referencing posts.
Never hallucinate or invent answers not supported by forum data.
Be factual, concise, and professional.
When users ask broad questions, prefer summarizing multiple sources rather than guessing.
Always prefer context from categories, tags, and wikis indexed in embeddings.
I updated the forced tool as suggested.
Falco
(Falco)
October 30, 2025, 1:06pm
4
I’m afraid a rehashed 2023 model won’t make the cut here. Also, per Ollama’s own documentation for this model, it only supports tool calling via the raw API, which is not what we use.
Overall, this is a poor choice for AI Bot today.
Instead, use one of the following:
viswanatha
(P Viswanatha Reddy)
October 31, 2025, 5:15am
5
Hi @Falco, thank you for the inputs. We will check and update you further.
viswanatha
(P Viswanatha Reddy)
November 4, 2025, 7:04pm
6
@Falco I’m facing a 502 error while enabling the LLM. Please find the logs below:
Message
Unicorn worker received USR2 signal indicating it is about to timeout, dumping backtrace for main thread
config/unicorn.conf.rb:203:in `backtrace'
config/unicorn.conf.rb:203:in `block (2 levels) in reload'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:229:in `wait_readable'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:229:in `rbuf_fill'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:199:in `readuntil'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:209:in `readline'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:158:in `read_status_line'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:147:in `read_new'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2420:in `block in transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2411:in `catch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2411:in `transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2384:in `request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/mini_profiler/profiling_methods.rb:45:in `step'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/patches/net_patches.rb:18:in `request_with_mini_profiler'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:168:in `block in perform_completion!'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:1632:in `start'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:1070:in `start'
/var/www/di...
Backtrace
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:130:in `block in warn'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:231:in `block in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:231:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:231:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:130:in `warn'
/var/www/discourse/lib/signal_trap_logger.rb:40:in `public_send'
/var/www/discourse/lib/signal_trap_logger.rb:40:in `block (2 levels) in ensure_logging_thread_running'
<internal:kernel>:187:in `loop'
/var/www/discourse/lib/signal_trap_logger.rb:37:in `block in ensure_logging_thread_running'
Falco
(Falco)
November 4, 2025, 7:10pm
7
Set the provider to OpenAI if you are using the OpenAI-compatible API of Ollama.
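In practical terms, this means pointing the OpenAI provider at Ollama’s OpenAI-compatible endpoint. A minimal sketch of what such a request looks like, assuming Ollama listens on the Docker bridge address 172.17.0.1 at its default port 11434 (the host and the `build_chat_request` helper are illustrative, not part of the plugin):

```python
import json
import urllib.request

def build_chat_request(base_url, model, user_message):
    """Build an OpenAI-compatible chat completion request for Ollama.

    base_url is assumed to be Ollama's OpenAI-compatible prefix,
    e.g. http://172.17.0.1:11434/v1 (adjust the host for your setup).
    """
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending this requires a running Ollama instance; here we only build it:
req = build_chat_request("http://172.17.0.1:11434/v1", "mistral:latest", "ping")
print(req.full_url)  # http://172.17.0.1:11434/v1/chat/completions
```

If a request like this times out from inside the container (as in the backtrace above), the problem is usually networking rather than the plugin configuration.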
viswanatha
(P Viswanatha Reddy)
November 4, 2025, 7:18pm
8
@Falco
I have tried with the OpenAI provider; the issue still persists.
Message
Unicorn worker received USR2 signal indicating it is about to timeout, dumping backtrace for main thread
config/unicorn.conf.rb:203:in `backtrace'
config/unicorn.conf.rb:203:in `block (2 levels) in reload'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:229:in `wait_readable'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:229:in `rbuf_fill'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:199:in `readuntil'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:209:in `readline'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:158:in `read_status_line'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:147:in `read_new'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2420:in `block in transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2411:in `catch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2411:in `transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2384:in `request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/mini_profiler/profiling_methods.rb:45:in `step'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-4.0.1/lib/patches/net_patches.rb:18:in `request_with_mini_profiler'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:168:in `block in perform_completion!'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:1632:in `start'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:1070:in `start'
/var/www/di...
Backtrace
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:130:in `block in warn'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:231:in `block in dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:231:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:231:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2.1/lib/active_support/broadcast_logger.rb:130:in `warn'
/var/www/discourse/lib/signal_trap_logger.rb:40:in `public_send'
/var/www/discourse/lib/signal_trap_logger.rb:40:in `block (2 levels) in ensure_logging_thread_running'
<internal:kernel>:187:in `loop'
/var/www/discourse/lib/signal_trap_logger.rb:37:in `block in ensure_logging_thread_running'
Falco
(Falco)
November 4, 2025, 7:30pm
9
Can your Discourse container access the service on port 11434? If Ollama is running on the host, you need to provide a way for the network traffic to cross the container boundary.
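For a standard install, crossing that boundary usually means reaching the host via the default docker0 bridge address and telling Discourse that this internal IP is allowed. A sketch of the relevant `app.yml` entry, assuming the default bridge address (verify yours with `ip addr show docker0`); the `DISCOURSE_ALLOWED_INTERNAL_HOSTS` variable is the one discussed later in this thread:

```yaml
# /var/discourse/containers/app.yml -- sketch, assuming Ollama runs on the
# Docker host and listens beyond loopback (e.g. OLLAMA_HOST=0.0.0.0).
env:
  # Pipe-separated list of internal hosts Discourse may call:
  DISCOURSE_ALLOWED_INTERNAL_HOSTS: "localhost|127.0.0.1|172.17.0.1"
```

After editing `app.yml`, rebuild the container (`./launcher rebuild app`) for the change to take effect.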
viswanatha
(P Viswanatha Reddy)
November 4, 2025, 7:33pm
10
Yes, I was able to access one of the LLM models earlier on the same port from the Discourse container.
viswanatha
(P Viswanatha Reddy)
November 4, 2025, 7:56pm
11
Could you please help with this? Also, may I know the minimum RAM requirement for the machine where this LLM is running?
Falco
(Falco)
November 4, 2025, 10:19pm
12
So it may be
Ah, I got it! The environment variable requires a special syntax in the enumeration: pipes instead of commas.
DISCOURSE_ALLOWED_INTERNAL_HOSTS: "localhost|127.0.0.1|172.17.0.1"
That depends on the model and context size.
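As a rough rule of thumb, RAM needed is the weight size (parameters × bytes per parameter, which depends on quantization) plus working overhead for the KV cache and runtime. A back-of-the-envelope sketch; the 1 GB overhead default is a guess and grows with context size, and the figures are approximate:

```python
def estimate_ram_gb(params_billion, bytes_per_param, context_overhead_gb=1.0):
    """Rough RAM estimate for a local LLM: weights + working overhead.

    bytes_per_param depends on quantization: ~0.5 for a 4-bit quant (Q4),
    2 for fp16. context_overhead_gb covers the KV cache and runtime and
    grows with context size -- the 1 GB default is only a placeholder.
    """
    weights_gb = params_billion * bytes_per_param
    return weights_gb + context_overhead_gb

# Mistral 7B, rough figures (not exact):
q4 = estimate_ram_gb(7, 0.5)   # ~4.5 GB for a 4-bit quant
fp16 = estimate_ram_gb(7, 2)   # ~15 GB for full fp16
print(round(q4, 1), round(fp16, 1))
```

This is why “it depends”: the same 7B model can need anywhere from roughly 5 GB to over 15 GB depending on quantization and context length.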
viswanatha
(P Viswanatha Reddy)
November 17, 2025, 12:37pm
13
Hi @Falco
I have successfully integrated the LLM outlined below. However, when I query it, the responses do not take my forum’s content into account: instead of drawing on the relevant discussions within my forum, the bot answers from the LLM’s pre-existing knowledge. What steps should I take to ensure the model incorporates my forum’s content?
LLM Used as suggested previously:
Example:
Falco
(Falco)
November 17, 2025, 1:41pm
14
On the persona’s Tools section, ensure it has access to both Search and Read, and set Search to Forced tool.
Also, is the forum content that the Persona is supposed to search all public?
viswanatha
(P Viswanatha Reddy)
November 18, 2025, 9:40am
15
@Falco ,
I have corrected the above settings, but the result is still the same.
I would like my bot to provide responses derived from the content stored in my knowledge base. Could you please share the complete settings for the AI plugin and related components? Additionally, do we need to run any specific commands within the application to enable Retrieval-Augmented Generation (RAG)?
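For context on what RAG does here: once embeddings are configured, retrieval means embedding the user’s question, ranking indexed posts by vector similarity, and feeding the top matches to the LLM as context. A toy sketch of that ranking step (the two-dimensional vectors stand in for real all-minilm embeddings; this illustrates the concept, not the plugin’s internals):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, docs):
    """docs: list of (title, vector). Returns titles ranked by similarity."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [title for title, _ in ranked]

# Toy vectors standing in for embedded forum posts:
docs = [("Networking wiki", [0.9, 0.1]), ("Unrelated post", [0.1, 0.9])]
print(retrieve([1.0, 0.0], docs))  # ['Networking wiki', 'Unrelated post']
```

If the bot never cites forum posts, the usual suspects are the embedding model not being reachable, posts not yet indexed, or the Search tool not being forced, rather than the prompt wording itself.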