Discourse AI causing new SSL and Connection Reset by Peer errors

Priority/Severity:
Recent repository changes make Discourse AI mostly unworkable with the current OpenAI API.

Platform:

  • Self-hosted, using the standard standalone build
  • Ubuntu 24.04 host VM, Docker containers
  • OpenAI API
  • Anthropic API

Description:

Discourse AI had been calling the external OpenAI API with several models and working well as of Feb 15 (the last container rebuild). Today (Feb 21) I rebuilt the container and things are no longer working.

Here’s what I know:

As of Feb 15
OpenAI models configured and working well:

  • LLM/Persona
    • GPT4 Omni
    • GPT4 Omni Mini
  • Embeddings
    • text-embedding-ada-002

As of Feb 21

All OpenAI models have roughly a 70-80% error rate for LLM calls, with the error message “Connection Reset by Peer”. Some chats go through; some fail partway through. Embedding calls fail with a Faraday::ConnectionFailed SSL error.

Additional OpenAI models fail:

  • o1-mini and o1-preview fail to test/save the LLM with a code error (‘developer’ is not a valid role), because the developer role is only valid for the o1 and o3 models, not their -mini versions. The source code at github.com/discourse/discourse-ai/…/chat_gpt.rb:61 needs to be updated to do an exact model-name match rather than a starts_with match. For the else case on line 73, there is no longer a system role; it needs to be updated to simply user. As of today, o1-mini cannot use tools.
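
The role error is reproducible outside Discourse with a direct curl call (a sketch; the exact error text returned by OpenAI may differ):

curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "o1-mini",
        "messages": [
          {"role": "developer", "content": "Answer briefly."},
          {"role": "user", "content": "ping"}
        ]
      }'
# o1-mini rejects this with an HTTP 400 role error; changing "developer"
# to "user" (or dropping the first message) makes the same call succeed.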

Have tried:

  • Checked OpenAI platform limits; we are well under the rate limits and the OpenAI account is funded
  • Rebuilding the container
  • Deleting and recreating LLM personas and users
  • Deleting and recreating LLM models
  • Creating new API token keys
  • Ensuring SSL and certificates were updated inside the container
  • Logging into the container and using bash and curl to call the API (successfully; see the sketch after this list)
  • Logging into the rails console (RAILS_ENV=production bundle exec rails console) and using an HTTP object to call the OpenAI API (successfully)
  • Making Anthropic API calls to claude-3.5-sonnet (successfully)
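
The in-container curl check was along these lines (a sketch; an authenticated request to the same host the plugin calls, shown here against the models endpoint):

curl -v https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
# The TLS handshake completes and the model list comes back, so raw
# connectivity and certificates inside the container look fine.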

Reproducible steps:

Create a new container build using the latest Discourse and add the Discourse AI plugin to plugins:

  ...
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/discourse-ai.git

Configure OpenAI LLM and Embeddings models with the following:

  • GPT4 Omni, GPT4 Omni Mini
    • All default values; insert your API key
    • Tokens: 64000
    • Click “Run test” and wait for the response: sometimes it succeeds, often it returns “Internal Server Error”. Even when the test succeeds, trying to chat with the persona produces the Inference LLM Model stack trace below
  • text-embedding-ada-002, text-embedding-3-large
    • Saves successfully, but generates error logs, with multiple entries repeating every 5 minutes

Internal Server Error Stack Trace

Message (2 copies reported)
Errno::ECONNRESET (Connection reset by peer)
app/controllers/application_controller.rb:427:in `block in with_resolved_locale'
app/controllers/application_controller.rb:427:in `with_resolved_locale'
lib/middleware/omniauth_bypass_middleware.rb:35:in `call'
lib/content_security_policy/middleware.rb:12:in `call'
lib/middleware/anonymous_cache.rb:409:in `call'
lib/middleware/csp_script_nonce_injector.rb:12:in `call'
config/initializers/008-rack-cors.rb:26:in `call'
config/initializers/100-quiet_logger.rb:20:in `call'
config/initializers/100-silence_logger.rb:29:in `call'
lib/middleware/enforce_hostname.rb:24:in `call'
lib/middleware/processing_request.rb:12:in `call'
lib/middleware/request_tracker.rb:385:in `call'
Backtrace
openssl (3.3.0) lib/openssl/buffering.rb:217:in `sysread_nonblock'
openssl (3.3.0) lib/openssl/buffering.rb:217:in `read_nonblock'
net-protocol (0.2.2) lib/net/protocol.rb:218:in `rbuf_fill'
net-protocol (0.2.2) lib/net/protocol.rb:199:in `readuntil'
net-protocol (0.2.2) lib/net/protocol.rb:209:in `readline'
net-http (0.6.0) lib/net/http/response.rb:625:in `read_chunked'
net-http (0.6.0) lib/net/http/response.rb:595:in `block in read_body_0'
net-http (0.6.0) lib/net/http/response.rb:570:in `inflater'
net-http (0.6.0) lib/net/http/response.rb:593:in `read_body_0'
net-http (0.6.0) lib/net/http/response.rb:363:in `read_body'
plugins/discourse-ai/lib/completions/endpoints/base.rb:374:in `non_streaming_response'
plugins/discourse-ai/lib/completions/endpoints/base.rb:160:in `block (2 levels) in perform_completion!'
net-http (0.6.0) lib/net/http.rb:2433:in `block in transport_request'
net-http (0.6.0) lib/net/http/response.rb:320:in `reading_body'
net-http (0.6.0) lib/net/http.rb:2430:in `transport_request'
net-http (0.6.0) lib/net/http.rb:2384:in `request'
rack-mini-profiler (3.3.1) lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler' 
rack-mini-profiler (3.3.1) lib/mini_profiler/profiling_methods.rb:44:in `step' 
rack-mini-profiler (3.3.1) lib/patches/net_patches.rb:18:in `request_with_mini_profiler' 
(eval at /var/www/discourse/lib/method_profiler.rb:38):12:in `request'
plugins/discourse-ai/lib/completions/endpoints/base.rb:122:in `block in perform_completion!'
net-http (0.6.0) lib/net/http.rb:1632:in `start'
net-http (0.6.0) lib/net/http.rb:1070:in `start'
plugins/discourse-ai/lib/completions/endpoints/base.rb:105:in `perform_completion!'
plugins/discourse-ai/lib/completions/endpoints/open_ai.rb:44:in `perform_completion!'
plugins/discourse-ai/lib/completions/llm.rb:281:in `generate'
plugins/discourse-ai/lib/configuration/llm_validator.rb:36:in `run_test'
plugins/discourse-ai/app/controllers/discourse_ai/admin/ai_llms_controller.rb:128:in `test'
actionpack (7.2.2.1) lib/action_controller/metal/basic_implicit_render.rb:8:in `send_action'
actionpack (7.2.2.1) lib/abstract_controller/base.rb:226:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/rendering.rb:193:in `process_action'
actionpack (7.2.2.1) lib/abstract_controller/callbacks.rb:261:in `block in process_action'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:121:in `block in run_callbacks'
app/controllers/application_controller.rb:427:in `block in with_resolved_locale'
i18n (1.14.7) lib/i18n.rb:353:in `with_locale'
app/controllers/application_controller.rb:427:in `with_resolved_locale'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:130:in `block in run_callbacks'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:141:in `run_callbacks'
actionpack (7.2.2.1) lib/abstract_controller/callbacks.rb:260:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/rescue.rb:27:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/instrumentation.rb:77:in `block in process_action'
activesupport (7.2.2.1) lib/active_support/notifications.rb:210:in `block in instrument'
activesupport (7.2.2.1) lib/active_support/notifications/instrumenter.rb:58:in `instrument'
activesupport (7.2.2.1) lib/active_support/notifications.rb:210:in `instrument'
actionpack (7.2.2.1) lib/action_controller/metal/instrumentation.rb:76:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/params_wrapper.rb:259:in `process_action'
activerecord (7.2.2.1) lib/active_record/railties/controller_runtime.rb:39:in `process_action'
actionpack (7.2.2.1) lib/abstract_controller/base.rb:163:in `process'
actionview (7.2.2.1) lib/action_view/rendering.rb:40:in `process'
rack-mini-profiler (3.3.1) lib/mini_profiler/profiling_methods.rb:115:in `block in profile_method' 
actionpack (7.2.2.1) lib/action_controller/metal.rb:252:in `dispatch'
actionpack (7.2.2.1) lib/action_controller/metal.rb:335:in `dispatch'
actionpack (7.2.2.1) lib/action_dispatch/routing/route_set.rb:67:in `dispatch'
actionpack (7.2.2.1) lib/action_dispatch/routing/route_set.rb:50:in `serve'
actionpack (7.2.2.1) lib/action_dispatch/routing/mapper.rb:32:in `block in <class:Constraints>'
actionpack (7.2.2.1) lib/action_dispatch/routing/mapper.rb:62:in `serve'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:53:in `block in serve'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:133:in `block in find_routes'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:126:in `each'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:126:in `find_routes'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:34:in `serve'
actionpack (7.2.2.1) lib/action_dispatch/routing/route_set.rb:896:in `call'
lib/middleware/omniauth_bypass_middleware.rb:35:in `call'
rack (2.2.11) lib/rack/tempfile_reaper.rb:15:in `call'
rack (2.2.11) lib/rack/conditional_get.rb:27:in `call'
rack (2.2.11) lib/rack/head.rb:12:in `call'
actionpack (7.2.2.1) lib/action_dispatch/http/permissions_policy.rb:38:in `call'
lib/content_security_policy/middleware.rb:12:in `call'
lib/middleware/anonymous_cache.rb:409:in `call'
lib/middleware/csp_script_nonce_injector.rb:12:in `call'
config/initializers/008-rack-cors.rb:26:in `call'
rack (2.2.11) lib/rack/session/abstract/id.rb:266:in `context'
rack (2.2.11) lib/rack/session/abstract/id.rb:260:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/cookies.rb:704:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/callbacks.rb:31:in `block in call'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:101:in `run_callbacks'
actionpack (7.2.2.1) lib/action_dispatch/middleware/callbacks.rb:30:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/debug_exceptions.rb:31:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/show_exceptions.rb:32:in `call'
logster (2.20.1) lib/logster/middleware/reporter.rb:40:in `call'
railties (7.2.2.1) lib/rails/rack/logger.rb:41:in `call_app'
railties (7.2.2.1) lib/rails/rack/logger.rb:29:in `call'
config/initializers/100-quiet_logger.rb:20:in `call'
config/initializers/100-silence_logger.rb:29:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/request_id.rb:33:in `call'
lib/middleware/enforce_hostname.rb:24:in `call'
rack (2.2.11) lib/rack/method_override.rb:24:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/executor.rb:16:in `call'
rack (2.2.11) lib/rack/sendfile.rb:110:in `call'
plugins/discourse-prometheus/lib/middleware/metrics.rb:14:in `call'
rack-mini-profiler (3.3.1) lib/mini_profiler.rb:334:in `call'
lib/middleware/processing_request.rb:12:in `call'
message_bus (4.3.9) lib/message_bus/rack/middleware.rb:60:in `call'
lib/middleware/request_tracker.rb:385:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/remote_ip.rb:96:in `call'
railties (7.2.2.1) lib/rails/engine.rb:535:in `call'
railties (7.2.2.1) lib/rails/railtie.rb:226:in `public_send'
railties (7.2.2.1) lib/rails/railtie.rb:226:in `method_missing'
rack (2.2.11) lib/rack/urlmap.rb:74:in `block in call'
rack (2.2.11) lib/rack/urlmap.rb:58:in `each'
rack (2.2.11) lib/rack/urlmap.rb:58:in `call'
unicorn (6.1.0) lib/unicorn/http_server.rb:634:in `process_client'
unicorn (6.1.0) lib/unicorn/http_server.rb:739:in `worker_loop'
unicorn (6.1.0) lib/unicorn/http_server.rb:547:in `spawn_missing_workers'
unicorn (6.1.0) lib/unicorn/http_server.rb:143:in `start'
unicorn (6.1.0) bin/unicorn:128:in `<top (required)>'
vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `load'
vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `<main>'

The errors in the logs are:

Embeddings Model

Error message in logs (every 5 minutes): Connection reset by peer (Faraday::ConnectionFailed)

application_version: 00907363d4b290df1c755df1a2494b95265e40b4

job: Jobs::EmbeddingsBackfill

Embeddings model error Stack trace

Job exception: 5 errors
Connection reset by peer (Faraday::ConnectionFailed)
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/openssl-3.3.0/lib/openssl/buffering.rb:217:in `sysread_nonblock'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/openssl-3.3.0/lib/openssl/buffering.rb:217:in `read_nonblock'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:218:in `rbuf_fill'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:199:in `readuntil'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:209:in `readline'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:625:in `read_chunked'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:595:in `block in read_body_0'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:570:in `inflater'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:593:in `read_body_0'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:363:in `read_body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:401:in `body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:321:in `reading_body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2430:in `transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2384:in `request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/mini_profiler/profiling_methods.rb:50:in `step'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:18:in `request_with_mini_profil...
Backtrace
concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:1268:in `raise' 
concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:1268:in `wait_until_resolved!' 
concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:998:in `value!' 
/var/www/discourse/plugins/discourse-ai/lib/embeddings/vector.rb:50:in `gen_bulk_reprensentations' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:134:in `block in populate_topic_embeddings' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:133:in `each' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:133:in `each_slice' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:133:in `populate_topic_embeddings' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:36:in `execute' 
/var/www/discourse/app/jobs/base.rb:316:in `block (2 levels) in perform' 
rails_multisite-6.1.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
rails_multisite-6.1.0/lib/rails_multisite/connection_management.rb:21:in `with_connection'
/var/www/discourse/app/jobs/base.rb:303:in `block in perform' 
/var/www/discourse/app/jobs/base.rb:299:in `each' 
/var/www/discourse/app/jobs/base.rb:299:in `perform' 
/var/www/discourse/app/jobs/base.rb:379:in `perform' 
mini_scheduler-0.18.0/lib/mini_scheduler/manager.rb:137:in `process_queue' 
mini_scheduler-0.18.0/lib/mini_scheduler/manager.rb:77:in `worker_loop' 
mini_scheduler-0.18.0/lib/mini_scheduler/manager.rb:63:in `block (2 levels) in ensure_worker_threads' 

Inference LLM Model

Error message in logs: Job exception: Connection reset by peer

application_version: 00907363d4b290df1c755df1a2494b95265e40b4

job: Jobs::CreateAiReply

LLM model error Stack trace

Message
Job exception: Connection reset by peer
Backtrace
openssl-3.3.0/lib/openssl/buffering.rb:217:in `sysread_nonblock' 
openssl-3.3.0/lib/openssl/buffering.rb:217:in `read_nonblock' 
net-protocol-0.2.2/lib/net/protocol.rb:218:in `rbuf_fill' 
net-protocol-0.2.2/lib/net/protocol.rb:199:in `readuntil' 
net-protocol-0.2.2/lib/net/protocol.rb:209:in `readline' 
net-http-0.6.0/lib/net/http/response.rb:625:in `read_chunked' 
net-http-0.6.0/lib/net/http/response.rb:595:in `block in read_body_0' 
net-http-0.6.0/lib/net/http/response.rb:570:in `inflater' 
net-http-0.6.0/lib/net/http/response.rb:593:in `read_body_0' 
net-http-0.6.0/lib/net/http/response.rb:363:in `read_body' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:374:in `non_streaming_response' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:160:in `block (2 levels) in perform_completion!' 
net-http-0.6.0/lib/net/http.rb:2433:in `block in transport_request' 
net-http-0.6.0/lib/net/http/response.rb:320:in `reading_body' 
net-http-0.6.0/lib/net/http.rb:2430:in `transport_request' 
net-http-0.6.0/lib/net/http.rb:2384:in `request' 
rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler' 
rack-mini-profiler-3.3.1/lib/mini_profiler/profiling_methods.rb:50:in `step' 
rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:18:in `request_with_mini_profiler' 
(eval at /var/www/discourse/lib/method_profiler.rb:38):5:in `request'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:122:in `block in perform_completion!' 
net-http-0.6.0/lib/net/http.rb:1632:in `start' 
net-http-0.6.0/lib/net/http.rb:1070:in `start' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:105:in `perform_completion!' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/open_ai.rb:44:in `perform_completion!' 
/var/www/discourse/plugins/discourse-ai/lib/completions/llm.rb:281:in `generate' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/bot.rb:65:in `get_updated_title' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/playground.rb:252:in `title_playground' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/playground.rb:561:in `ensure in reply_to' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/playground.rb:561:in `reply_to' 
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/create_ai_reply.rb:18:in `execute' 
/var/www/discourse/app/jobs/base.rb:316:in `block (2 levels) in perform' 
rails_multisite-6.1.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
rails_multisite-6.1.0/lib/rails_multisite/connection_management.rb:21:in `with_connection'
/var/www/discourse/app/jobs/base.rb:303:in `block in perform' 
/var/www/discourse/app/jobs/base.rb:299:in `each' 
/var/www/discourse/app/jobs/base.rb:299:in `perform' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:202:in `execute_job' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:170:in `block (2 levels) in process' 
sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:177:in `block in invoke' 
/var/www/discourse/lib/sidekiq/pausable.rb:132:in `call' 
sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:179:in `block in invoke' 
sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:182:in `invoke' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:169:in `block in process' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:136:in `block (6 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_retry.rb:113:in `local' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:135:in `block (5 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq.rb:44:in `block in <module:Sidekiq>' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:131:in `block (4 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:263:in `stats' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:126:in `block (3 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_logger.rb:13:in `call' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:125:in `block (2 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_retry.rb:80:in `global' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:124:in `block in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_logger.rb:39:in `prepare' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:123:in `dispatch' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:168:in `process' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:78:in `process_one' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:68:in `run' 
sidekiq-6.5.12/lib/sidekiq/component.rb:8:in `watchdog' 
sidekiq-6.5.12/lib/sidekiq/component.rb:17:in `block in safe_thread' 

Any ideas, suggestions, etc. would be most welcome.

We have made over 2,000 requests to the OpenAI API on this site in the last month, and we have no errors like that in our logs.

I checked another site we host that uses OpenAI heavily, with 80,000 requests last month, and they have zero Faraday errors and only two “Connection Reset by Peer” errors in the period.

Is it possible that something is wrong in your server's network? I once had Errno::ECONNRESET (Connection reset by peer) due to a faulty NIC driver.

I'll look into it.

As you wrote, I originally thought this was a network stack problem; however, repeated calls to the OpenAI API from the same container work fine.

This is also with a very recent build, with commits up to Feb 21.
To demonstrate this (at the cost of burning tokens), I put together a quick script to exercise the network path to OpenAI. It:

  • Runs for 600 seconds (10 minutes)
  • Makes one chat completion call per second
  • Varies the prompt to avoid caching

Enter the container with ./launcher enter app, save the script below, make it executable with chmod +x test_openai.sh, and then run it with OPENAI_API_KEY=.... ./test_openai.sh

test_openai.sh
#!/bin/bash

# Run duration
DURATION_SECS=600

# Initialize counters
successful=0
unsuccessful=0
declare -A error_messages

# Function to calculate the failure percentage
calc_percentage() {
    local total=$(($1 + $2))
    if [ $total -eq 0 ]; then
        echo "0.00"
    else
        echo "scale=2; ($2 * 100) / $total" | bc
    fi
}

# Function to print statistics
print_stats() {
    local percent=$(calc_percentage $successful $unsuccessful)
    echo "-------------------"
    echo "Successful calls: $successful"
    echo "Failed calls: $unsuccessful"
    echo "Failure rate: ${percent}%"
    echo "Error messages:"
    for error in "${!error_messages[@]}"; do
        echo "  - $error (${error_messages[$error]} times)"
    done
}

end_time=$((SECONDS + DURATION_SECS))

counter=1
while [ $SECONDS -lt $end_time ]; do
    # Make the API call with a timeout
    response=$(curl -s -w "\n%{http_code}" \
        -X POST \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -d "{
            \"model\": \"gpt-4o-mini\",
            \"messages\": [{\"role\": \"user\", \"content\": \"Use this number to choose a one word response: $counter\"}]
        }" \
        --connect-timeout 5 \
        --max-time 10 \
        https://api.openai.com/v1/chat/completions 2>&1)

    # Grab the last line (status code) and the response body
    http_code=$(echo "$response" | tail -n1)
    body=$(echo "$response" | sed '$d')

    # Check whether the call succeeded
    if [ "$http_code" = "200" ]; then
        ((successful++))
    else
        ((unsuccessful++))
        # Extract the error message
        error_msg=$(echo "$body" | grep -o '"message":"[^"]*"' | cut -d'"' -f4)
        if [ -z "$error_msg" ]; then
            error_msg="Connection error: $body"
        fi
        # Increment the counter for this error message
        ((error_messages["$error_msg"]++))
    fi

    # Print current statistics
    print_stats

    ((counter++))
    
    # Wait 1 second before the next call
    sleep 1
done

With the test script, my failure rate was under 0.5%, which is acceptable at that scale.
This tells me the problem lies in the Discourse software and not in the container or the network stack beneath it.
If a recent commit hasn't already fixed it, I'll investigate further.

I fixed the regression around o1-mini and o1-preview here:

However, I'm confused about the SSL issues; we didn't change anything in our underlying library here.

Maybe this is related to streaming. Can you try disabling streaming on your OpenAI LLM and see if this resolves it? Your test there is using gpt-4o-mini without streaming.
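
For reference, the only request-level difference is the stream flag; a minimal curl comparison of the two modes (a sketch against the public chat completions endpoint):

# Non-streaming: a single JSON body arrives in one response
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'

# Streaming: server-sent "data:" chunks arrive over a connection held open
# while the model generates, which is where a mid-stream RST would surface
curl -sN https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "stream": true, "messages": [{"role": "user", "content": "ping"}]}'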

That's great! Nice work!

While diagnosing, I found another bug: on the LLM configuration page (/admin/plugins/discourse-ai/ai-llms/%/edit), checking either of the options "Disable native tool support (use XML-based tools) (optional)" or "Disable streaming completions (convert streaming requests to non-streaming)" and clicking Save shows a brief "Success!" toast, but after reloading the page one or both options appear unchecked.

The connection reset problems persist and I am still investigating; it looks like a combination of the Ruby code handling the sockets (FinalDestination / DNS resolution / Faraday) and a Debian 12 container on an Ubuntu 24.04 VM.

I have spun up an Ubuntu 22.04 test VM and there are no problems there: all embeddings and inference work perfectly. I have not seen a single reset.

I'll keep working on it; perhaps it is related to something new in how Ubuntu 24.04 manages the TCP stack with netplan.
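
One way to chase that theory is to diff the TCP-related kernel settings between the working 22.04 VM and the 24.04 host (a sketch using standard Linux sysctls; the output file names are just placeholders):

# On each host, dump TCP-related sysctls to a file named after the machine
sysctl -a 2>/dev/null | grep -E '^net\.(ipv4\.tcp_|core\.)' | sort > "tcp_sysctls_$(hostname).txt"

# Then copy both files to one machine and compare
diff tcp_sysctls_ubuntu2204.txt tcp_sysctls_ubuntu2404.txt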

Thanks, I fixed the persistence issue today. Can you update and try again?

Okay, a small update: we could not establish a direct connection to the OpenAI API from our corporate IP range. Cloudflare was sending RST packets roughly 1 ms after the TLS handshake.

So we set up a Cloudflare AI Gateway as a drop-in URL replacement for the OpenAI API endpoint, and it works perfectly with the LLM configuration.

It appears Cloudflare has an undocumented rate-limiting policy for unrecognized IP ranges (i.e. not Azure, AWS, GCP, etc.) that was being triggered. The pool of 100 connections used for embeddings would trip that limit, as the sketch below approximates.
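
A rough way to approximate that burst from the shell (a sketch: 100 parallel requests to mirror the pool size mentioned above; curl reports 000 when the connection is reset before a response arrives):

seq 1 100 | xargs -P 100 -I{} curl -s -o /dev/null -w "%{http_code}\n" \
    --connect-timeout 5 --max-time 15 \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model": "text-embedding-ada-002", "input": "probe {}"}' \
    https://api.openai.com/v1/embeddings | sort | uniq -c
# Tallies the response codes: mostly 200s on a healthy path, a burst of
# 000s when connections are being reset.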

As an aside, Cloudflare has an Authenticated Gateway feature that adds a special header token.

From their documentation:

curl https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai/chat/completions \
  --header 'cf-aig-authorization: Bearer {CF_AIG_TOKEN}' \
  --header 'Authorization: Bearer OPENAI_TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{"model": "gpt-4o" .......

It would be great if there were a feature to add per-LLM headers on the LLM configuration screen.

That way, we could add the cf-aig-authorization key and value to the LLM for every call we make.

This is a tough one; it's a lot of UI for an edge case.

Any chance you could try openrouter.ai? Might that solve this problem as well?

I'm not categorically against allowing arbitrary headers, but it's a very advanced configuration. Perhaps it would be fine behind a hidden site setting (a site setting to enable the advanced UI).

Is this a contribution your company could help drive for the open source plugin?

I haven't been able to get approval for a contribution, but we'll keep trying. Thanks for the help so far!