Discourse AI causing new SSL and Connection Reset by Peer errors

Priority/Severity:
Recent repository changes make Discourse AI mostly unworkable with the current OpenAI API.

Platform:

  • Self-hosted, using standard standalone build
  • Ubuntu 24.04 host VM, Docker containers
  • OpenAI API
  • Anthropic API

Description:

Discourse AI had been calling the external OpenAI API and working well as of Feb 15 (the last container rebuild). Today (Feb 21) I rebuilt the container and things are no longer working.

Here’s what I know:

As of Feb 15
OpenAI models configured and working well:

  • LLM/Persona
    • GPT4 Omni
    • GPT4 Omni Mini
  • Embeddings
    • text-embedding-ada-002

As of Feb 21

All OpenAI models show roughly a 70-80% error rate on LLM calls with the error message “Connection Reset by Peer”. Some chats go through; some fail partway through. Embedding calls fail with a Faraday::ConnectionFailed SSL error.

Additional OpenAI models fail:

  • o1-mini and o1-preview fail to test/save the LLM with a code error (‘developer’ is not a valid role), because the developer role is only valid for the o1 and o3 models, not their -mini versions. The source at github.com/discourse/discourse-ai/…/chat_gpt.rb:61 needs to do an exact model-name match, not a starts_with match. For the else case on line 73, there is no longer a system role; it needs to be updated to simply user. As of today, o1-mini cannot use tools. See the sketch below.
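
To make the suggested fix concrete, here is a rough sketch of the role-selection logic (hypothetical method name and structure, not the plugin's actual code; the real logic is in chat_gpt.rb around lines 61-73):

  # Hypothetical sketch of the fix described above.
  def api_role_for(model_name)
    # Buggy: a starts_with match treats "o1-mini" and "o1-preview" like "o1",
    # so they get sent the "developer" role they do not accept:
    #   model_name.start_with?("o1")  # => true for "o1-mini" as well
    # Fixed: an exact match limits "developer" to models that accept it.
    if %w[o1 o3].include?(model_name)
      "developer"
    else
      "user" # the "system" role is no longer accepted here; fall back to "user"
    end
  end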

Have tried:

  • Checked the OpenAI platform limits: we are well under the rate limits, and the OpenAI account is funded.
  • Rebuilding container
  • Deleting and recreating LLM personas and users
  • Deleting and creating LLM models
  • Creating new API token keys
  • Ensuring SSL and certificates were updated inside container
  • Logging into the container and using bash & curl to call the API (successful)
  • Logging into the rails console (RAILS_ENV=production bundle exec rails console) and using an HTTP object to call the OpenAI API (successful; see the sketch after this list)
  • Anthropic API calls to claude-3.5-sonnet (successful)
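
For reference, the rails console check looked roughly like this (a sketch assuming a plain Net::HTTP call and an exported OPENAI_API_KEY; the exact HTTP object used may differ):

  # Run inside the container:
  #   cd /var/www/discourse
  #   RAILS_ENV=production bundle exec rails console
  require "net/http"

  uri = URI("https://api.openai.com/v1/models")
  res =
    Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      req = Net::HTTP::Get.new(uri)
      req["Authorization"] = "Bearer #{ENV["OPENAI_API_KEY"]}" # assumed env var
      http.request(req)
    end
  puts res.code # "200" shows TLS and routing work outside the plugin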

Reproducible steps:

Create a new container build using the latest Discourse and add the Discourse AI plugin to plugins:

  ...
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/discourse-ai.git

Configure OpenAI LLM and Embeddings models with the following:

  • GPT4 Omni, GPT4 Omni Mini
    • All default values, insert your api key
    • Tokens: 64000
    • Click “Run test” and wait for a response: sometimes it succeeds, often it returns “Internal Server Error”. When it does succeed, trying to chat with the persona produces the Inference LLM Model stack trace
  • text-embedding-ada-002, text-embedding-3-large
    • Saves successfully, but generates multiple error logs, repeating every 5 minutes

Internal Server Error Stack Trace

Message (2 copies reported)
Errno::ECONNRESET (Connection reset by peer)
app/controllers/application_controller.rb:427:in `block in with_resolved_locale'
app/controllers/application_controller.rb:427:in `with_resolved_locale'
lib/middleware/omniauth_bypass_middleware.rb:35:in `call'
lib/content_security_policy/middleware.rb:12:in `call'
lib/middleware/anonymous_cache.rb:409:in `call'
lib/middleware/csp_script_nonce_injector.rb:12:in `call'
config/initializers/008-rack-cors.rb:26:in `call'
config/initializers/100-quiet_logger.rb:20:in `call'
config/initializers/100-silence_logger.rb:29:in `call'
lib/middleware/enforce_hostname.rb:24:in `call'
lib/middleware/processing_request.rb:12:in `call'
lib/middleware/request_tracker.rb:385:in `call'
Backtrace
openssl (3.3.0) lib/openssl/buffering.rb:217:in `sysread_nonblock'
openssl (3.3.0) lib/openssl/buffering.rb:217:in `read_nonblock'
net-protocol (0.2.2) lib/net/protocol.rb:218:in `rbuf_fill'
net-protocol (0.2.2) lib/net/protocol.rb:199:in `readuntil'
net-protocol (0.2.2) lib/net/protocol.rb:209:in `readline'
net-http (0.6.0) lib/net/http/response.rb:625:in `read_chunked'
net-http (0.6.0) lib/net/http/response.rb:595:in `block in read_body_0'
net-http (0.6.0) lib/net/http/response.rb:570:in `inflater'
net-http (0.6.0) lib/net/http/response.rb:593:in `read_body_0'
net-http (0.6.0) lib/net/http/response.rb:363:in `read_body'
plugins/discourse-ai/lib/completions/endpoints/base.rb:374:in `non_streaming_response'
plugins/discourse-ai/lib/completions/endpoints/base.rb:160:in `block (2 levels) in perform_completion!'
net-http (0.6.0) lib/net/http.rb:2433:in `block in transport_request'
net-http (0.6.0) lib/net/http/response.rb:320:in `reading_body'
net-http (0.6.0) lib/net/http.rb:2430:in `transport_request'
net-http (0.6.0) lib/net/http.rb:2384:in `request'
rack-mini-profiler (3.3.1) lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler' 
rack-mini-profiler (3.3.1) lib/mini_profiler/profiling_methods.rb:44:in `step' 
rack-mini-profiler (3.3.1) lib/patches/net_patches.rb:18:in `request_with_mini_profiler' 
(eval at /var/www/discourse/lib/method_profiler.rb:38):12:in `request'
plugins/discourse-ai/lib/completions/endpoints/base.rb:122:in `block in perform_completion!'
net-http (0.6.0) lib/net/http.rb:1632:in `start'
net-http (0.6.0) lib/net/http.rb:1070:in `start'
plugins/discourse-ai/lib/completions/endpoints/base.rb:105:in `perform_completion!'
plugins/discourse-ai/lib/completions/endpoints/open_ai.rb:44:in `perform_completion!'
plugins/discourse-ai/lib/completions/llm.rb:281:in `generate'
plugins/discourse-ai/lib/configuration/llm_validator.rb:36:in `run_test'
plugins/discourse-ai/app/controllers/discourse_ai/admin/ai_llms_controller.rb:128:in `test'
actionpack (7.2.2.1) lib/action_controller/metal/basic_implicit_render.rb:8:in `send_action'
actionpack (7.2.2.1) lib/abstract_controller/base.rb:226:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/rendering.rb:193:in `process_action'
actionpack (7.2.2.1) lib/abstract_controller/callbacks.rb:261:in `block in process_action'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:121:in `block in run_callbacks'
app/controllers/application_controller.rb:427:in `block in with_resolved_locale'
i18n (1.14.7) lib/i18n.rb:353:in `with_locale'
app/controllers/application_controller.rb:427:in `with_resolved_locale'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:130:in `block in run_callbacks'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:141:in `run_callbacks'
actionpack (7.2.2.1) lib/abstract_controller/callbacks.rb:260:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/rescue.rb:27:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/instrumentation.rb:77:in `block in process_action'
activesupport (7.2.2.1) lib/active_support/notifications.rb:210:in `block in instrument'
activesupport (7.2.2.1) lib/active_support/notifications/instrumenter.rb:58:in `instrument'
activesupport (7.2.2.1) lib/active_support/notifications.rb:210:in `instrument'
actionpack (7.2.2.1) lib/action_controller/metal/instrumentation.rb:76:in `process_action'
actionpack (7.2.2.1) lib/action_controller/metal/params_wrapper.rb:259:in `process_action'
activerecord (7.2.2.1) lib/active_record/railties/controller_runtime.rb:39:in `process_action'
actionpack (7.2.2.1) lib/abstract_controller/base.rb:163:in `process'
actionview (7.2.2.1) lib/action_view/rendering.rb:40:in `process'
rack-mini-profiler (3.3.1) lib/mini_profiler/profiling_methods.rb:115:in `block in profile_method' 
actionpack (7.2.2.1) lib/action_controller/metal.rb:252:in `dispatch'
actionpack (7.2.2.1) lib/action_controller/metal.rb:335:in `dispatch'
actionpack (7.2.2.1) lib/action_dispatch/routing/route_set.rb:67:in `dispatch'
actionpack (7.2.2.1) lib/action_dispatch/routing/route_set.rb:50:in `serve'
actionpack (7.2.2.1) lib/action_dispatch/routing/mapper.rb:32:in `block in <class:Constraints>'
actionpack (7.2.2.1) lib/action_dispatch/routing/mapper.rb:62:in `serve'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:53:in `block in serve'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:133:in `block in find_routes'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:126:in `each'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:126:in `find_routes'
actionpack (7.2.2.1) lib/action_dispatch/journey/router.rb:34:in `serve'
actionpack (7.2.2.1) lib/action_dispatch/routing/route_set.rb:896:in `call'
lib/middleware/omniauth_bypass_middleware.rb:35:in `call'
rack (2.2.11) lib/rack/tempfile_reaper.rb:15:in `call'
rack (2.2.11) lib/rack/conditional_get.rb:27:in `call'
rack (2.2.11) lib/rack/head.rb:12:in `call'
actionpack (7.2.2.1) lib/action_dispatch/http/permissions_policy.rb:38:in `call'
lib/content_security_policy/middleware.rb:12:in `call'
lib/middleware/anonymous_cache.rb:409:in `call'
lib/middleware/csp_script_nonce_injector.rb:12:in `call'
config/initializers/008-rack-cors.rb:26:in `call'
rack (2.2.11) lib/rack/session/abstract/id.rb:266:in `context'
rack (2.2.11) lib/rack/session/abstract/id.rb:260:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/cookies.rb:704:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/callbacks.rb:31:in `block in call'
activesupport (7.2.2.1) lib/active_support/callbacks.rb:101:in `run_callbacks'
actionpack (7.2.2.1) lib/action_dispatch/middleware/callbacks.rb:30:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/debug_exceptions.rb:31:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/show_exceptions.rb:32:in `call'
logster (2.20.1) lib/logster/middleware/reporter.rb:40:in `call'
railties (7.2.2.1) lib/rails/rack/logger.rb:41:in `call_app'
railties (7.2.2.1) lib/rails/rack/logger.rb:29:in `call'
config/initializers/100-quiet_logger.rb:20:in `call'
config/initializers/100-silence_logger.rb:29:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/request_id.rb:33:in `call'
lib/middleware/enforce_hostname.rb:24:in `call'
rack (2.2.11) lib/rack/method_override.rb:24:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/executor.rb:16:in `call'
rack (2.2.11) lib/rack/sendfile.rb:110:in `call'
plugins/discourse-prometheus/lib/middleware/metrics.rb:14:in `call'
rack-mini-profiler (3.3.1) lib/mini_profiler.rb:334:in `call'
lib/middleware/processing_request.rb:12:in `call'
message_bus (4.3.9) lib/message_bus/rack/middleware.rb:60:in `call'
lib/middleware/request_tracker.rb:385:in `call'
actionpack (7.2.2.1) lib/action_dispatch/middleware/remote_ip.rb:96:in `call'
railties (7.2.2.1) lib/rails/engine.rb:535:in `call'
railties (7.2.2.1) lib/rails/railtie.rb:226:in `public_send'
railties (7.2.2.1) lib/rails/railtie.rb:226:in `method_missing'
rack (2.2.11) lib/rack/urlmap.rb:74:in `block in call'
rack (2.2.11) lib/rack/urlmap.rb:58:in `each'
rack (2.2.11) lib/rack/urlmap.rb:58:in `call'
unicorn (6.1.0) lib/unicorn/http_server.rb:634:in `process_client'
unicorn (6.1.0) lib/unicorn/http_server.rb:739:in `worker_loop'
unicorn (6.1.0) lib/unicorn/http_server.rb:547:in `spawn_missing_workers'
unicorn (6.1.0) lib/unicorn/http_server.rb:143:in `start'
unicorn (6.1.0) bin/unicorn:128:in `<top (required)>'
vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `load'
vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `<main>'

Viewing the logs, the errors are:

Embeddings Model

Error message in logs: (every 5 mins) Connection reset by peer (Faraday::ConnectionFailed)

application_version: 00907363d4b290df1c755df1a2494b95265e40b4

job: Jobs::EmbeddingsBackfill

Embeddings model error Stack trace

Job exception: 5 errors
Connection reset by peer (Faraday::ConnectionFailed)
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/openssl-3.3.0/lib/openssl/buffering.rb:217:in `sysread_nonblock'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/openssl-3.3.0/lib/openssl/buffering.rb:217:in `read_nonblock'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:218:in `rbuf_fill'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:199:in `readuntil'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-protocol-0.2.2/lib/net/protocol.rb:209:in `readline'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:625:in `read_chunked'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:595:in `block in read_body_0'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:570:in `inflater'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:593:in `read_body_0'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:363:in `read_body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:401:in `body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http/response.rb:321:in `reading_body'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2430:in `transport_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/net-http-0.6.0/lib/net/http.rb:2384:in `request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/mini_profiler/profiling_methods.rb:50:in `step'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:18:in `request_with_mini_profil...
Backtrace
concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:1268:in `raise' 
concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:1268:in `wait_until_resolved!' 
concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:998:in `value!' 
/var/www/discourse/plugins/discourse-ai/lib/embeddings/vector.rb:50:in `gen_bulk_reprensentations' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:134:in `block in populate_topic_embeddings' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:133:in `each' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:133:in `each_slice' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:133:in `populate_topic_embeddings' 
/var/www/discourse/plugins/discourse-ai/app/jobs/scheduled/embeddings_backfill.rb:36:in `execute' 
/var/www/discourse/app/jobs/base.rb:316:in `block (2 levels) in perform' 
rails_multisite-6.1.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
rails_multisite-6.1.0/lib/rails_multisite/connection_management.rb:21:in `with_connection'
/var/www/discourse/app/jobs/base.rb:303:in `block in perform' 
/var/www/discourse/app/jobs/base.rb:299:in `each' 
/var/www/discourse/app/jobs/base.rb:299:in `perform' 
/var/www/discourse/app/jobs/base.rb:379:in `perform' 
mini_scheduler-0.18.0/lib/mini_scheduler/manager.rb:137:in `process_queue' 
mini_scheduler-0.18.0/lib/mini_scheduler/manager.rb:77:in `worker_loop' 
mini_scheduler-0.18.0/lib/mini_scheduler/manager.rb:63:in `block (2 levels) in ensure_worker_threads' 

Inference LLM Model

Error message in logs: Job exception: Connection reset by peer

application_version: 00907363d4b290df1c755df1a2494b95265e40b4

job: Jobs::CreateAiReply

LLM model error Stack trace

Message
Job exception: Connection reset by peer
Backtrace
openssl-3.3.0/lib/openssl/buffering.rb:217:in `sysread_nonblock' 
openssl-3.3.0/lib/openssl/buffering.rb:217:in `read_nonblock' 
net-protocol-0.2.2/lib/net/protocol.rb:218:in `rbuf_fill' 
net-protocol-0.2.2/lib/net/protocol.rb:199:in `readuntil' 
net-protocol-0.2.2/lib/net/protocol.rb:209:in `readline' 
net-http-0.6.0/lib/net/http/response.rb:625:in `read_chunked' 
net-http-0.6.0/lib/net/http/response.rb:595:in `block in read_body_0' 
net-http-0.6.0/lib/net/http/response.rb:570:in `inflater' 
net-http-0.6.0/lib/net/http/response.rb:593:in `read_body_0' 
net-http-0.6.0/lib/net/http/response.rb:363:in `read_body' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:374:in `non_streaming_response' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:160:in `block (2 levels) in perform_completion!' 
net-http-0.6.0/lib/net/http.rb:2433:in `block in transport_request' 
net-http-0.6.0/lib/net/http/response.rb:320:in `reading_body' 
net-http-0.6.0/lib/net/http.rb:2430:in `transport_request' 
net-http-0.6.0/lib/net/http.rb:2384:in `request' 
rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:19:in `block in request_with_mini_profiler' 
rack-mini-profiler-3.3.1/lib/mini_profiler/profiling_methods.rb:50:in `step' 
rack-mini-profiler-3.3.1/lib/patches/net_patches.rb:18:in `request_with_mini_profiler' 
(eval at /var/www/discourse/lib/method_profiler.rb:38):5:in `request'
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:122:in `block in perform_completion!' 
net-http-0.6.0/lib/net/http.rb:1632:in `start' 
net-http-0.6.0/lib/net/http.rb:1070:in `start' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/base.rb:105:in `perform_completion!' 
/var/www/discourse/plugins/discourse-ai/lib/completions/endpoints/open_ai.rb:44:in `perform_completion!' 
/var/www/discourse/plugins/discourse-ai/lib/completions/llm.rb:281:in `generate' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/bot.rb:65:in `get_updated_title' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/playground.rb:252:in `title_playground' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/playground.rb:561:in `ensure in reply_to' 
/var/www/discourse/plugins/discourse-ai/lib/ai_bot/playground.rb:561:in `reply_to' 
/var/www/discourse/plugins/discourse-ai/app/jobs/regular/create_ai_reply.rb:18:in `execute' 
/var/www/discourse/app/jobs/base.rb:316:in `block (2 levels) in perform' 
rails_multisite-6.1.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
rails_multisite-6.1.0/lib/rails_multisite/connection_management.rb:21:in `with_connection'
/var/www/discourse/app/jobs/base.rb:303:in `block in perform' 
/var/www/discourse/app/jobs/base.rb:299:in `each' 
/var/www/discourse/app/jobs/base.rb:299:in `perform' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:202:in `execute_job' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:170:in `block (2 levels) in process' 
sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:177:in `block in invoke' 
/var/www/discourse/lib/sidekiq/pausable.rb:132:in `call' 
sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:179:in `block in invoke' 
sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:182:in `invoke' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:169:in `block in process' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:136:in `block (6 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_retry.rb:113:in `local' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:135:in `block (5 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq.rb:44:in `block in <module:Sidekiq>' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:131:in `block (4 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:263:in `stats' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:126:in `block (3 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_logger.rb:13:in `call' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:125:in `block (2 levels) in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_retry.rb:80:in `global' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:124:in `block in dispatch' 
sidekiq-6.5.12/lib/sidekiq/job_logger.rb:39:in `prepare' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:123:in `dispatch' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:168:in `process' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:78:in `process_one' 
sidekiq-6.5.12/lib/sidekiq/processor.rb:68:in `run' 
sidekiq-6.5.12/lib/sidekiq/component.rb:8:in `watchdog' 
sidekiq-6.5.12/lib/sidekiq/component.rb:17:in `block in safe_thread' 

Any ideas, suggestions, etc. would be most welcome.

1 like

We have made more than 2,000 requests to the OpenAI API on this site in the last month, and we have no errors like this in our logs.

I checked another site we host that uses OpenAI heavily, with 80,000 requests in the last month; they have zero Faraday errors and only two “Connection Reset by Peer” errors in that period.

Is it possible something is wrong with your server's network? I once had Errno::ECONNRESET (Connection reset by peer) because of a faulty NIC driver.

I will check that.

1 like

As you wrote, I initially thought it was a problem in the network stack; however, repeated OpenAI API calls from the container itself work fine.

This is also with a very recent build, commits from Feb 21.

To prove this (at the cost of consuming tokens), I created a quick script to check the network path to OpenAI. It:

  • Runs for 600 seconds (10 minutes)
  • Makes one chat-completion call per second
  • Varies the prompt to avoid caching

Run it inside the container (./launcher enter app), save the script below, make it executable with chmod +x test_openai.sh, then invoke it with OPENAI_API_KEY=.... ./test_openai.sh

test_openai.sh
#!/bin/bash

# Duration to run
DURATION_SECS=600

# Initialise counters
successful=0
unsuccessful=0
declare -A error_messages

# Calculate the failure percentage
calc_percentage() {
    local total=$(($1 + $2))
    if [ $total -eq 0 ]; then
        echo "0.00"
    else
        echo "scale=2; ($2 * 100) / $total" | bc
    fi
}

# Print running statistics
print_stats() {
    local percent=$(calc_percentage $successful $unsuccessful)
    echo "-------------------"
    echo "Successful calls: $successful"
    echo "Failed calls: $unsuccessful"
    echo "Failure rate: ${percent}%"
    echo "Error messages:"
    for error in "${!error_messages[@]}"; do
        echo "  - $error (${error_messages[$error]} times)"
    done
}

end_time=$((SECONDS + DURATION_SECS))

counter=1
while [ $SECONDS -lt $end_time ]; do
    # Make the API call with a timeout
    response=$(curl -s -w "\n%{http_code}" \
        -X POST \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -d "{
            \"model\": \"gpt-4o-mini\",
            \"messages\": [{\"role\": \"user\", \"content\": \"Use this number to choose a one word response: $counter\"}]
        }" \
        --connect-timeout 5 \
        --max-time 10 \
        https://api.openai.com/v1/chat/completions 2>&1)

    # Get the last line (status code) and the response body
    http_code=$(echo "$response" | tail -n1)
    body=$(echo "$response" | sed '$d')

    # Check whether the call succeeded
    if [ "$http_code" = "200" ]; then
        ((successful++))
    else
        ((unsuccessful++))
        # Extract the error message
        error_msg=$(echo "$body" | grep -o '"message":"[^"]*"' | cut -d'"' -f4)
        if [ -z "$error_msg" ]; then
            error_msg="Connection error: $body"
        fi
        # Increment the count for this error message
        ((error_messages["$error_msg"]++))
    fi

    # Print current statistics
    print_stats

    ((counter++))

    # Wait 1 second before the next call
    sleep 1
done

For the test script, my failure rate was under 0.5%, which is acceptable at that scale.

This tells me the problem lies in the Discourse software and not in the container or the network stack beneath it.

If it has not been fixed by a recent commit, I will take a deeper look.

2 likes

I fixed the regression around o1-mini and o1-preview here:

However, I am confused about the SSL issues, since we have not changed anything in our underlying library here.

Perhaps it is related to streaming; try disabling streaming on your OpenAI LLM and see if that resolves it? Your test above is using gpt-4o-mini without streaming.
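
For context, what that toggle changes at the HTTP level is essentially the stream flag in the request body; a sketch of an OpenAI chat-completions payload (illustrative, not the plugin's code):

  # stream: false returns one complete JSON document; stream: true returns
  # server-sent events read from the socket incrementally, so a mid-response
  # RST would surface while the body is still being consumed.
  body = {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "hello" }],
    stream: false,
  }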

3 likes

That is awesome! Nice work!

In the process of diagnosing this, I found another bug: on the LLM configuration page (/admin/plugins/discourse-ai/ai-llms/%/edit), selecting either of the options “Disable native tool support (use XML-based tools) (optional)” or “Disable streaming completions (convert streaming requests to non-streaming)” and clicking Save shows a temporary “Success!” notification, but on reloading the page both (or either) of the options are unchecked again.

The connection reset issues still persist and I am still investigating; however, it appears to be a combination of the Ruby socket-handling code (FinalDestination / DNS resolution / Faraday) and the Debian 12 container running on an Ubuntu 24.04 VM.

I created an Ubuntu 22.04 test VM and there are no issues there: all embeddings and inference work perfectly. I did not see a single reset.

I will keep working on this; perhaps it is related to a new way Ubuntu 24.04 manages the TCP stack with netplan.

2 likes

Thanks, I fixed the persistence issue today; you can update and try again.

3 likes

OK, a small update: we could not get the direct OpenAI API connection working on the corporate IP range. Cloudflare was sending RST packets about 1 ms after the TLS handshake.

So we set up a Cloudflare AI Gateway as a URL replacement for the OpenAI API endpoint, and it works perfectly with the LLM configuration.

It looks like Cloudflare has an undocumented rate-limiting policy for unrecognised IP ranges (i.e. not Azure, AWS, GCP, etc.) that kicks in. The pool of 100 connections for Embeddings would hit that limit.

As a side note, Cloudflare has an Authenticated Gateway feature that adds a special header token.

From their documentation:

curl https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai/chat/completions \
  --header 'cf-aig-authorization: Bearer {CF_AIG_TOKEN}' \
  --header 'Authorization: Bearer OPENAI_TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{"model": "gpt-4o" .......

It would be great if there were a way to add per-LLM headers on the LLM configuration screen.

That way we could add the cf-aig-authorization key and value to the LLM for every call we make; see the sketch below for the idea.
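
Conceptually the feature would just merge admin-supplied key/value pairs into each outbound request; a hypothetical sketch (no such per-LLM header setting exists in the plugin today, and the env var names are made up):

  extra_headers = {
    "cf-aig-authorization" => "Bearer #{ENV["CF_AIG_TOKEN"]}", # hypothetical
  }

  request_headers = {
    "Authorization" => "Bearer #{ENV["OPENAI_API_KEY"]}",
    "Content-Type" => "application/json",
  }.merge(extra_headers) # extra per-LLM keys ride along on every call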

That is a tough ask; it is a lot of UI to handle one specific use case.

Any chance you could try openrouter.ai, since that may also solve this problem?

I am not categorically opposed to allowing arbitrary headers, but it is very advanced configuration. Maybe it would be fine behind a hidden site setting (a site setting to enable the advanced UI).

Is this a contribution your company could help drive for the open-source plugin?

1 like

I have not yet been able to get approval for a contribution, but we will keep trying. Thanks for all the help so far!