Discourse AI causing new SSL and Connection Reset by Peer errors

Priority/Severity:
Recent repository changes make Discourse AI mostly unusable with the current OpenAI API.

Platform:

  • Self-hosted, using standard standalone build
  • Ubuntu 24.04 host VM, Docker containers
  • OpenAI API
  • Anthropic API

Description:

Discourse AI had been calling the external OpenAI API with the models below and working well as of Feb 15 (last container rebuild). Today (Feb 21) I rebuilt the container and things stopped working.

Here’s what I know:

As of Feb 15
OpenAI models configured and working well:

  • LLM/Persona
    • GPT4 Omni
    • GPT4 Omni Mini
  • Embeddings
    • text-embedding-ada-002

As of Feb 21

All OpenAI models have roughly a 70-80% error rate for LLM calls with the error message “Connection Reset by Peer”. Some chats go through; some fail partway through. Embedding calls fail with a Faraday::ConnectionFailed SSL error.

Additional OpenAI models fail:

  • o1-mini and o1-preview fail to test/save the LLM with a code error (‘developer’ is not a valid role), because the developer role is only valid for o1 and o3 models, not their -mini versions. The source at github.com/discourse/discourse-ai/…/chat_gpt.rb:61 needs to be updated to do an exact model-name match rather than a starts_with match. For the else case on line 73, there is no longer a system role; it needs to be simply user. As of today, o1-mini cannot use tools.
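The suggested fix can be sketched roughly like this (the method name and model list here are hypothetical, for illustration only, not the plugin's actual chat_gpt.rb code):

```ruby
# Hypothetical sketch of the role mapping described above: an exact
# model-name match instead of start_with?, and "user" instead of "system"
# in the else branch. Illustrative only -- not the plugin's real code.
def reasoning_system_role(model_name)
  # Only the full o1/o3 reasoning models accept the "developer" role;
  # o1-mini and o1-preview reject it, so start_with?("o1") is too broad.
  if %w[o1 o3].include?(model_name)
    "developer"
  else
    # These models no longer accept a "system" role; fall back to "user".
    "user"
  end
end
```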

Have tried:

  • Checked OpenAI platform limits and we are well under Rate Limits and OpenAI account is funded.
  • Rebuilding container
  • Deleting and recreating LLM personas and users
  • Deleting and creating LLM models
  • Creating new API token keys
  • Ensuring SSL and certificates were updated inside container
  • Logging into container and using bash & curl to call API (successfully)
  • Logging into the rails console (RAILS_ENV=production bundle exec rails console) and using an HTTP object to call the OpenAI API (successfully)
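The console check described above looked roughly like this (a sketch, not the exact commands used; the request here is only sent when an API key is present in the environment):

```ruby
require "net/http"
require "json"
require "uri"

# Build a minimal chat completion request against the OpenAI API.
uri = URI("https://api.openai.com/v1/chat/completions")
req = Net::HTTP::Post.new(uri)
req["Content-Type"] = "application/json"
req["Authorization"] = "Bearer #{ENV["OPENAI_API_KEY"]}"
req.body = {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "ping" }],
}.to_json

# Only fire the request when a key is actually set, so the sketch stays runnable.
if ENV["OPENAI_API_KEY"]
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
  puts res.code
end
```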
  • Anthropic API calls to claude-3.5-sonnet (successfully)

Reproducible steps:

Create a new container build using the latest Discourse and add the Discourse AI plugin to plugins:

  ...
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/discourse-ai.git

Configure OpenAI LLM and Embeddings models with the following:

  • GPT4 Omni, GPT4 Omni Mini
    • All default values, insert your api key
    • Tokens: 64000
    • Click “Run test” and wait for the response: sometimes success, often “Internal Server Error”. When successful, trying to chat with the persona produces the Inference LLM Model stack trace
  • text-embedding-ada-002, text-embedding-3-large
    • Saves successfully, but generates error logs, repeating every 5 mins

Internal Server Error Stack Trace

Message (2 copies reported)
Errno::ECONNRESET (Connection reset by peer)
app/controllers/application_controller.rb:427:in `block in with_resolved_locale'
app/controllers/application_controller.rb:427:in `with_resolved_locale'
lib/middleware/omniauth_bypass_middleware.rb:35:in `call'
lib/content_security_policy/middleware.rb:12:in `call'
lib/middleware/anonymous_cache.rb:409:in `call'
lib/middleware/csp_script_nonce_injector.rb:12:in `call'
config/initializers/008-rack-cors.rb:26:in `call'
config/initializers/100-quiet_logger.rb:20:in `call'
config/initializers/100-silence_logger.rb:29:in `call'
lib/middleware/enforce_hostname.rb:24:in `call'
lib/middleware/processing_request.rb:12:in `call'
lib/middleware/request_tracker.rb:385:in `call'
Backtrace
openssl (3.3.0) lib/openssl/buffering.rb:217:in `sysread_nonblock'
openssl (3.3.0) lib/openssl/buffering.rb:217:in `read_nonblock'
net-protocol (0.2.2) lib/net/protocol.rb:218:in `rbuf_fill'
net-protocol (0.2.2) lib/net/protocol.rb:199:in `readuntil'
net-protocol (0.2.2) lib/net/protocol.rb:209:in `readline'
net-http (0.6.0) lib/net/http/response.rb:625:in `read_chunked'
net-http (0.6.0) lib/net/http/response.rb:595:in `block in read_body_0'
net-http (0.6.0) lib/net/http/response.rb:570:in `inflater'
net-http (0.6.0) lib/net/http/response.rb:593:in `read_body_0'
net-http (0.6.0) lib/net/http/response.rb:363:in `read_body'
plugins/discourse-ai/lib/completions/endpoints/base.rb:374:in `non_streaming_response'
plugins/discourse-ai/lib/completions/endpoints/base.rb:160:in `block (2 levels) in perform_completion!'
net-http (0.6.0) lib/net/http.rb:2433:in `block in transport_request'
net-http (0.6.0) lib/net/http/response.rb:320:in `reading_body'
net-http (0.6.0) lib/net/http.rb:2430:in `transport_request'
net-http (0.6.0) lib/net/http.rb:2384:in `request'
rack-mini-profiler (3.3.1) lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler' 
rack-mini-profiler (3.3.1) lib/mini_profiler/profiling_methods.rb:44:in `step' 
rack-mini-profiler (3.3.1) lib/patches/net_patches.rb:18:in `request_with_mini_profiler' 
(eval at /var/www/discourse/lib/method_profiler.rb:38):12:in `request'
plugins/discourse-ai/lib/completions/endpoints/base.rb:122:in `block in perform_completion!'
net-http (0.6.0) lib/net/http.rb:1632:in `start'
net-http (0.6.0) lib/net/http.rb:1070:in `start'
plugins/discourse-ai/lib/completions/endpoints/base.rb:105:in `perform_completion!'
plugins/discourse-ai/lib/completions/endpoints/open_ai.rb:44:in `perform_completion!'
plugins/discourse-ai/lib/completions/llm.rb:281:in `generate'
plugins/discourse-ai/lib/configuration/llm_validator.rb:36:in `run_test'
plugins/discourse-ai/app/controllers/discourse_ai/admin/ai_llms_controller.rb:128:in `test'
actionpack (7.2.2.1) lib/action_controller/metal/basic_implicit_render.rb:8:in `send_action'
actionpack (7.2.2.1) lib/abstract_controller/base.rb:226:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/rendering.rb:193:in `process_action'
actionpack (7.2.2.1) lib/abstract_controller/callbacks.rb:261:in `block in process_action'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:121:in `block in run_callbacks'
app/controllers/application_controller.rb:427:in `block in with_resolved_locale'
i18n (1.14.7) lib/i18n.rb:353:in `with_locale'
app/controllers/application_controller.rb:427:in `with_resolved_locale'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:130:in `block in run_callbacks'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:141:in `run_callbacks'
actionpack (7.2.2.1) lib/abstract_controller/callbacks.rb:260:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/rescue.rb:27:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/instrumentation.rb:77:in `block in process_action'
activesupport (7.2.2.1) lib/active_support/notifications.rb:210:in `block in instrument'
activesupport (7.2.2.1) lib/active_support/notifications/instrumenter.rb:58:in `instrument'
activesupport (7.2.2.1) lib/active_support/notifications.rb:210:in `instrument'
actionpack (7.2.2.1) lib/action_controller/metal/instrumentation.rb:76:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/params_wrapper.rb:259:in `process_action'
activerecord (7.2.2.1) lib/active_record/railties/controller_runtime.rb:39:in `process_action'
actionpack (7.2.2.1) lib/abstract_controller/base.rb:163:in `process'
actionview (7.2.2.1) lib/action_view/rendering.rb:40:in `process'
rack-mini-profiler (3.3.1) lib/mini_profiler/profiling_methods.rb:115:in `block in profile_method' 
actionpack (7.2.2.1) lib/action_controller/metal.rb:252:in `dispatch'
actionpack (7.2.2.1) lib/action_controller/metal.rb:335:in `dispatch'
actionpack (7.2.2.1) lib/action_dispatch/routing/route_set.rb:67:in `dispatch'
actionpack (7.2.2.1) lib/action_dispatch/routing/route_set.rb:50:in `serve'
actionpack (7.2.2.1) lib/action_dispatch/routing/mapper.rb:32:in `block in <class:Constraints>'
actionpack (7.2.2.1) lib/action_dispatch/routing/mapper.rb:62:in `serve'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:53:in `block in serve'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:133:in `block in find_routes'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:126:in `each'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:126:in `find_routes'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:34:in `serve'
actionpack (7.2.2.1) lib/action_dispatch/routing/route_set.rb:896:in `call'
lib/middleware/omniauth_bypass_middleware.rb:35:in `call'
rack (2.2.11) lib/rack/tempfile_reaper.rb:15:in `call'
rack (2.2.11) lib/rack/conditional_get.rb:27:in `call'
rack (2.2.11) lib/rack/head.rb:12:in `call'
actionpack (7.2.2.1) lib/action_dispatch/http/permissions_policy.rb:38:in `call'
lib/content_security_policy/middleware.rb:12:in `call'
lib/middleware/anonymous_cache.rb:409:in `call'
lib/middleware/csp_script_nonce_injector.rb:12:in `call'
config/initializers/008-rack-cors.rb:26:in `call'
rack (2.2.11) lib/rack/session/abstract/id.rb:266:in `context'
rack (2.2.11) lib/rack/session/abstract/id.rb:260:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/cookies.rb:704:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/callbacks.rb:31:in `block in call'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:101:in `run_callbacks'
actionpack (7.2.2.1) lib/action_dispatch/middleware/callbacks.rb:30:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/debug_exceptions.rb:31:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/show_exceptions.rb:32:in `call'
logster (2.20.1) lib/logster/middleware/reporter.rb:40:in `call'
railties (7.2.2.1) lib/rails/rack/logger.rb:41:in `call_app'
railties (7.2.2.1) lib/rails/rack/logger.rb:29:in `call'
config/initializers/100-quiet_logger.rb:20:in `call'
config/initializers/100-silence_logger.rb:29:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/request_id.rb:33:in `call'
lib/middleware/enforce_hostname.rb:24:in `call'
rack (2.2.11) lib/rack/method_override.rb:24:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/executor.rb:16:in `call'
rack (2.2.11) lib/rack/sendfile.rb:110:in `call'
plugins/discourse-prometheus/lib/middleware/metrics.rb:14:in `call'
rack-mini-profiler (3.3.1) lib/mini_profiler.rb:334:in `call'
lib/middleware/processing_request.rb:12:in `call'
message_bus (4.3.9) lib/message_bus/rack/middleware.rb:60:in `call'
lib/middleware/request_tracker.rb:385:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/remote_ip.rb:96:in `call'
railties (7.2.2.1) lib/rails/engine.rb:535:in `call'
railties (7.2.2.1) lib/rails/railtie.rb:226:in `public_send'
railties (7.2.2.1) lib/rails/railtie.rb:226:in `method_missing'
rack (2.2.11) lib/rack/urlmap.rb:74:in `block in call'
rack (2.2.11) lib/rack/urlmap.rb:58:in `each'
rack (2.2.11) lib/rack/urlmap.rb:58:in `call'
unicorn (6.1.0) lib/unicorn/http_server.rb:634:in `process_client'
unicorn (6.1.0) lib/unicorn/http_server.rb:739:in `worker_loop'
unicorn (6.1.0) lib/unicorn/http_server.rb:547:in `spawn_missing_workers'
unicorn (6.1.0) lib/unicorn/http_server.rb:143:in `start'
unicorn (6.1.0) bin/unicorn:128:in `<top (required)>'
vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `load'
vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `<main>'

The errors viewed in the logs are:

Embeddings Model

Error message in logs (every 5 mins): Connection reset by peer (Faraday::ConnectionFailed)

application_version: 00907363d4b290df1c755df1a2494b95265e40b4

job: Jobs::EmbeddingsBackfill

Embeddings model error Stack trace

Job exception: 5 errors
Connection reset by peer (Faraday::ConnectionFailed)
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/openssl-3.3.0/lib/openssl/buffering.rb:217:in `sysread_nonblock'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/openssl-3.3.0/lib/openssl/buffering.rb:217:in `read_nonblock'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:218:in `rbuf_fill'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:199:in `readuntil'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:209:in `readline'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:625:in `read_chunked'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:595:in `block in read_body_0'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:570:in `inflater'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:593:in `read_body_0'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:363:in `read_body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:401:in `body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:321:in `reading_body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2430:in `transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2384:in `request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/mini_profiler/profiling_methods.rb:50:in `step'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:18:in `request_with_mini_profil...
Backtrace
concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:1268:in `raise' 
concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:1268:in `wait_until_resolved!' 
concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:998:in `value!' 
/var/www/discourse/plugins/discourse-ai/lib/embeddings/vector.rb:50:in `gen_bulk_reprensentations' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:134:in `block in populate_topic_embeddings' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:133:in `each' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:133:in `each_slice' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:133:in `populate_topic_embeddings' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:36:in `execute' 
/var/www/discourse/app/jobs/base.rb:316:in `block (2 levels) in perform' 
rails_multisite-6.1.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
rails_multisite-6.1.0/lib/rails_multisite/connection_management.rb:21:in `with_connection'
/var/www/discourse/app/jobs/base.rb:303:in `block in perform' 
/var/www/discourse/app/jobs/base.rb:299:in `each' 
/var/www/discourse/app/jobs/base.rb:299:in `perform' 
/var/www/discourse/app/jobs/base.rb:379:in `perform' 
mini_scheduler-0.18.0/lib/mini_scheduler/manager.rb:137:in `process_queue' 
mini_scheduler-0.18.0/lib/mini_scheduler/manager.rb:77:in `worker_loop' 
mini_scheduler-0.18.0/lib/mini_scheduler/manager.rb:63:in `block (2 levels) in ensure_worker_threads' 

Inference LLM Model

Error message in logs: Job exception: Connection reset by peer

application_version: 00907363d4b290df1c755df1a2494b95265e40b4

job: Jobs::CreateAiReply

LLM model error Stack trace

Message
Job exception: Connection reset by peer
Backtrace
openssl-3.3.0/lib/openssl/buffering.rb:217:in `sysread_nonblock' 
openssl-3.3.0/lib/openssl/buffering.rb:217:in `read_nonblock' 
net-protocol-0.2.2/lib/net/protocol.rb:218:in `rbuf_fill' 
net-protocol-0.2.2/lib/net/protocol.rb:199:in `readuntil' 
net-protocol-0.2.2/lib/net/protocol.rb:209:in `readline' 
net-http-0.6.0/lib/net/http/response.rb:625:in `read_chunked' 
net-http-0.6.0/lib/net/http/response.rb:595:in `block in read_body_0' 
net-http-0.6.0/lib/net/http/response.rb:570:in `inflater' 
net-http-0.6.0/lib/net/http/response.rb:593:in `read_body_0' 
net-http-0.6.0/lib/net/http/response.rb:363:in `read_body' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:374:in `non_streaming_response' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:160:in `block (2 levels) in perform_completion!' 
net-http-0.6.0/lib/net/http.rb:2433:in `block in transport_request' 
net-http-0.6.0/lib/net/http/response.rb:320:in `reading_body' 
net-http-0.6.0/lib/net/http.rb:2430:in `transport_request' 
net-http-0.6.0/lib/net/http.rb:2384:in `request' 
rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler' 
rack-mini-profiler-3.3.1/lib/mini_profiler/profiling_methods.rb:50:in `step' 
rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:18:in `request_with_mini_profiler' 
(eval at /var/www/discourse/lib/method_profiler.rb:38):5:in `request'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:122:in `block in perform_completion!' 
net-http-0.6.0/lib/net/http.rb:1632:in `start' 
net-http-0.6.0/lib/net/http.rb:1070:in `start' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:105:in `perform_completion!' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/open_ai.rb:44:in `perform_completion!' 
/var/www/discourse/plugins/discourse-ai/lib/completions/llm.rb:281:in `generate' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/bot.rb:65:in `get_updated_title' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/playground.rb:252:in `title_playground' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/playground.rb:561:in `ensure in reply_to' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/playground.rb:561:in `reply_to' 
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/create_ai_reply.rb:18:in `execute' 
/var/www/discourse/app/jobs/base.rb:316:in `block (2 levels) in perform' 
rails_multisite-6.1.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
rails_multisite-6.1.0/lib/rails_multisite/connection_management.rb:21:in `with_connection'
/var/www/discourse/app/jobs/base.rb:303:in `block in perform' 
/var/www/discourse/app/jobs/base.rb:299:in `each' 
/var/www/discourse/app/jobs/base.rb:299:in `perform' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:202:in `execute_job' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:170:in `block (2 levels) in process' 
sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:177:in `block in invoke' 
/var/www/discourse/lib/sidekiq/pausable.rb:132:in `call' 
sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:179:in `block in invoke' 
sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:182:in `invoke' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:169:in `block in process' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:136:in `block (6 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_retry.rb:113:in `local' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:135:in `block (5 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq.rb:44:in `block in <module:Sidekiq>' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:131:in `block (4 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:263:in `stats' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:126:in `block (3 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_logger.rb:13:in `call' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:125:in `block (2 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_retry.rb:80:in `global' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:124:in `block in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_logger.rb:39:in `prepare' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:123:in `dispatch' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:168:in `process' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:78:in `process_one' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:68:in `run' 
sidekiq-6.5.12/lib/sidekiq/component.rb:8:in `watchdog' 
sidekiq-6.5.12/lib/sidekiq/component.rb:17:in `block in safe_thread' 

Any ideas, suggestions, etc. would be most welcome.

1 Like

We have made over 2,000 requests to the OpenAI API on this site in the past month, and we have no such errors in our logs.

I have another site that we host which uses OpenAI heavily, with 80,000 requests in the past month, and it has zero Faraday errors and only two “Connection Reset by Peer” errors in that period.

Is it possible that your server network is misconfigured? I once had Errno::ECONNRESET (Connection reset by peer) because of a faulty NIC driver.

I will look into that.

1 Like

As you wrote, I originally thought this was a problem with the network stack. However, repeated OpenAI API calls from the same container work flawlessly.

This also happens with a very recent build, using commits from Feb 21.

To prove this (at the cost of token usage), I put together a quick script to test the OpenAI network stack.

  • Runs for 600 seconds (10 minutes)
  • Makes one chat completion call per second
  • Varies the prompt to avoid caching

Run it inside the container (./launcher enter app), save the script below, make it executable with chmod +x test_openai.sh, then invoke it with OPENAI_API_KEY=.... ./test_openai.sh.

test_openai.sh
#!/bin/bash

# How long to run
DURATION_SECS=600

# Initialize counters
successful=0
unsuccessful=0
declare -A error_messages

# Calculate the failure percentage
calc_percentage() {
    local total=$(($1 + $2))
    if [ $total -eq 0 ]; then
        echo "0.00"
    else
        echo "scale=2; ($2 * 100) / $total" | bc
    fi
}

# Print the running statistics
print_stats() {
    local percent=$(calc_percentage $successful $unsuccessful)
    echo "-------------------"
    echo "Successful calls: $successful"
    echo "Failed calls: $unsuccessful"
    echo "Error rate: ${percent}%"
    echo "Error messages:"
    for error in "${!error_messages[@]}"; do
        echo "  - $error (${error_messages[$error]} times)"
    done
}

end_time=$((SECONDS + DURATION_SECS))

counter=1
while [ $SECONDS -lt $end_time ]; do
    # Make the API call with a timeout
    response=$(curl -s -w "\n%{http_code}" \
        -X POST \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -d "{
            \"model\": \"gpt-4o-mini\",
            \"messages\": [{\"role\": \"user\", \"content\": \"Use this number to choose a one word response: $counter\"}]
        }" \
        --connect-timeout 5 \
        --max-time 10 \
        https://api.openai.com/v1/chat/completions 2>&1)

    # Grab the last line (status code) and the response body
    http_code=$(echo "$response" | tail -n1)
    body=$(echo "$response" | sed '$d')

    # Check whether the call succeeded
    if [ "$http_code" = "200" ]; then
        ((successful++))
    else
        ((unsuccessful++))
        # Extract the error message
        error_msg=$(echo "$body" | grep -o '"message":"[^"]*"' | cut -d'"' -f4)
        if [ -z "$error_msg" ]; then
            error_msg="Connection error: $body"
        fi
        # Count this error message
        ((error_messages["$error_msg"]++))
    fi

    # Show the running statistics
    print_stats

    ((counter++))

    # Wait 1 second before the next call
    sleep 1
done

With the test script my error rate was under 0.5%, which is acceptable at this scale.

That suggests the problem lies in the Discourse software, not in the container or the network stack behind it.

If it hasn't been fixed by a recent commit, I will take a closer look.

2 Likes

I fixed the regression around o1-mini and o1-preview here:

I'm unsure about the SSL issues, though; we haven't changed anything in our underlying library here.

Maybe it is related to streaming. Could you try disabling streaming on your OpenAI LLM and see if that resolves the problem? Your test there uses gpt-4o-mini without streaming.

3 Likes

That's great! Well done!

While diagnosing I found another bug: on the LLM configuration page (/admin/plugins/discourse-ai/ai-llms/%/edit), when selecting either of the options “Disable native tool support (use XML-based tools) (optional)” or “Disable streaming completions (convert streaming to non-streaming requests)” and clicking Save, a temporary “Success!” toast is shown, but on reloading the page one or both options are unchecked.

The connection reset problems persist and I'm still investigating, but it looks like a combination of the Ruby code's (FinalDestination / DNS resolution / Faraday) socket handling combined with a Debian 12 container on an Ubuntu 24.04 VM.

I spun up a test Ubuntu 22.04 VM and there are no problems there; all embeddings and inference work perfectly. I haven't seen a single reset yet.

I'll keep working on it; perhaps it's related to a new way Ubuntu 24.04 manages the TCP stack with netplan.

2 Likes

Thanks, the persistence issue was fixed today. Could you upgrade and try again?

3 Likes

Okay, a small update: we could not establish a direct OpenAI API connection from the company's IP range. Cloudflare was sending RST packets about 1 ms after the TLS handshake.

So we set up a Cloudflare AI Gateway as a drop-in URL replacement for the OpenAI API endpoint, and it works flawlessly with the LLM configuration.

It appears Cloudflare has an undocumented rate limit for unknown IP ranges (i.e. not Azure, AWS, GCP, etc.) that kicks in; the 100-connection pool used for embeddings would trigger that limit.

As an aside, Cloudflare offers an Authenticated Gateway feature that adds a special header token.

From their documentation:

curl https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai/chat/completions \
  --header 'cf-aig-authorization: Bearer {CF_AIG_TOKEN}' \
  --header 'Authorization: Bearer OPENAI_TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{"model": "gpt-4o" .......

It would be great if there were a feature to add per-LLM headers on the LLM configuration screen.

That way we could add the cf-aig-authorization key and value to every call we make to the LLM.

That's a tough one; it's a lot of UI for an edge case.

Is there any chance you could try openrouter.ai? That might solve this problem as well?

I'm not categorically opposed to allowing arbitrary headers, but it's a very advanced configuration. Perhaps it would be fine behind a hidden site setting (a site setting to enable the advanced UI).

Could your company help drive this open-source plugin forward?

1 Like

I haven't been able to get sign-off on a contribution yet, but we'll keep at it. Thanks for the help so far!