Self-hosting Discourse in Mainland China behind the GFW

Background

The self-hosting documentation for Discourse is excellent, and the installation process went smoothly. However, I ran into problems when deploying my server in mainland China, behind the Great Firewall.

Problem

Whenever I run ./launcher rebuild app, the process fails due to network restrictions. Below are the key error messages:

I, [2025-01-14T10:21:45.402169 #1]  INFO -- : Scope: all 17 workspace projects
Lockfile is up to date, resolution step is skipped
Progress: resolved 1, reused 0, downloaded 0, added 0
Packages: +87 -15
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------
Progress: resolved 87, reused 46, downloaded 0, added 33
Progress: resolved 87, reused 46, downloaded 18, added 53
Progress: resolved 87, reused 46, downloaded 25, added 61
Progress: resolved 87, reused 46, downloaded 29, added 66
Progress: resolved 87, reused 46, downloaded 30, added 67
Progress: resolved 87, reused 46, downloaded 30, added 68
 WARN  GET https://registry.npmjs.org/@floating-ui/utils/-/utils-0.2.9.tgz error (ECONNRESET). Will retry in 10 seconds. 2 retries left.
 WARN  GET https://registry.npmjs.org/typescript/-/typescript-5.7.3.tgz error (ECONNRESET). Will retry in 10 seconds. 2 retries left.
 WARN  GET https://registry.npmjs.org/lefthook-linux-x64/-/lefthook-linux-x64-1.10.4.tgz error (ECONNRESET). Will retry in 10 seconds. 2 retries left.
 WARN  GET https://registry.npmjs.org/@embroider/compat/-/compat-3.8.0.tgz error (ECONNRESET). Will retry in 10 seconds. 2 retries left.
 WARN  GET https://registry.npmjs.org/@fortawesome/fontawesome-free/-/fontawesome-free-6.7.2.tgz error (ECONNRESET). Will retry in 10 seconds. 2 retries left.
 WARN  GET https://registry.npmjs.org/ace-builds/-/ace-builds-1.37.4.tgz error (ECONNRESET). Will retry in 10 seconds. 2 retries left.
 WARN  GET https://registry.npmjs.org/@floating-ui/utils/-/utils-0.2.9.tgz error (ERR_SOCKET_TIMEOUT). Will retry in 1 minute. 1 retries left.
 WARN  GET https://registry.npmjs.org/typescript/-/typescript-5.7.3.tgz error (ERR_SOCKET_TIMEOUT). Will retry in 1 minute. 1 retries left.
 WARN  GET https://registry.npmjs.org/lefthook-linux-x64/-/lefthook-linux-x64-1.10.4.tgz error (ERR_SOCKET_TIMEOUT). Will retry in 1 minute. 1 retries left.
 WARN  GET https://registry.npmjs.org/@embroider/compat/-/compat-3.8.0.tgz error (ERR_SOCKET_TIMEOUT). Will retry in 1 minute. 1 retries left.
 WARN  GET https://registry.npmjs.org/@fortawesome/fontawesome-free/-/fontawesome-free-6.7.2.tgz error (ERR_SOCKET_TIMEOUT). Will retry in 1 minute. 1 retries left.
 WARN  GET https://registry.npmjs.org/ace-builds/-/ace-builds-1.37.4.tgz error (ERR_SOCKET_TIMEOUT). Will retry in 1 minute. 1 retries left.
 ERR_SOCKET_TIMEOUT  request to https://registry.npmjs.org/@embroider/compat/-/compat-3.8.0.tgz failed, reason: Socket timeout

FetchError: request to https://registry.npmjs.org/@embroider/compat/-/compat-3.8.0.tgz failed, reason: Socket timeout
    at ClientRequest.<anonymous> (/usr/lib/node_modules/pnpm/dist/pnpm.cjs:66979:18)
    at ClientRequest.emit (node:events:517:28)
    at TLSSocket.socketErrorListener (node:_http_client:501:9)
    at TLSSocket.emit (node:events:529:35)
    at emitErrorNT (node:internal/streams/destroy:151:8)
    at emitErrorCloseNT (node:internal/streams/destroy:116:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)

I, [2025-01-14T10:21:45.402602 #1]  INFO -- : Terminating async processes
I, [2025-01-14T10:21:45.402652 #1]  INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main pid: 39
107:signal-handler (1736850105) Received SIGTERM scheduling shutdown...
I, [2025-01-14T10:21:45.402684 #1]  INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 107
2025-01-14 10:21:45.402 UTC [39] LOG:  received fast shutdown request
2025-01-14 10:21:45.404 UTC [39] LOG:  aborting any active transactions
2025-01-14 10:21:45.406 UTC [39] LOG:  background worker "logical replication launcher" (PID 54) exited with exit code 1
2025-01-14 10:21:45.407 UTC [39] LOG:  shutting down
2025-01-14 10:21:45.431 UTC [39] LOG:  database system is shut down
107:M 14 Jan 2025 10:21:45.436 # User requested shutdown...
107:M 14 Jan 2025 10:21:45.436 * Saving the final RDB snapshot before exiting.
107:M 14 Jan 2025 10:21:45.441 * DB saved on disk
107:M 14 Jan 2025 10:21:45.441 # Redis is now ready to exit, bye bye...


FAILED
--------------------
Pups::ExecError: cd /var/www/discourse && if [ -f yarn.lock ]; then
  if [ -d node_modules/.pnpm ]; then
    echo "This version of Discourse uses yarn, but pnpm node_modules are preset. Cleaning up..."
    find ./node_modules ./app/assets/javascripts/*/node_modules -mindepth 1 -maxdepth 1 -exec rm -rf {} +
  fi
  su discourse -c 'yarn install --frozen-lockfile && yarn cache clean'
else
  su discourse -c 'CI=1 pnpm install --frozen-lockfile && pnpm prune'
fi failed with return #<Process::Status: pid 301 exit 1>
Location of failure: /usr/local/lib/ruby/gems/3.3.0/gems/pups-1.2.1/lib/pups/exec_command.rb:132:in `spawn'
exec failed with the params {"cd"=>"$home", "hook"=>"yarn", "cmd"=>["if [ -f yarn.lock ]; then\n  if [ -d node_modules/.pnpm ]; then\n    echo \"This version of Discourse uses yarn, but pnpm node_modules are preset. Cleaning up...\"\n    find ./node_modules ./app/assets/javascripts/*/node_modules -mindepth 1 -maxdepth 1 -exec rm -rf {} +\n  fi\n  su discourse -c 'yarn install --frozen-lockfile && yarn cache clean'\nelse\n  su discourse -c 'CI=1 pnpm install --frozen-lockfile && pnpm prune'\nfi"]}
bootstrap failed with exit code 1
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.
58bc76b977b8eb7c806f0308caacabe389605c6242852e8f17c30076b728de67

It looks like the failure is caused by being unable to fetch packages or dependencies (e.g., from GitHub or npm) due to network restrictions in mainland China.

What I tried

Using a proxy: I tried setting up a proxy server using Shell Clash and successfully accessed Google (which is normally unreachable from mainland China). Here is the test output:

root@lavm-hypge0pc5w:/var/discourse# sudo wget google.com
--2025-01-14 18:35:22--  http://google.com/
Resolving google.com (google.com)... 198.18.0.5
Connecting to google.com (google.com)|198.18.0.5|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.com/ [following]
--2025-01-14 18:35:22--  http://www.google.com/
Resolving www.google.com (www.google.com)... 198.18.0.6
Connecting to www.google.com (www.google.com)|198.18.0.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html.2’

index.html.2                                       [ <=>                                                                                              ]  19.38K  --.-KB/s    in 0.05s   

2025-01-14 18:35:22 (372 KB/s) - ‘index.html.2’ saved [19841]

root@lavm-hypge0pc5w:/var/discourse# 

However, after running ./launcher rebuild app, I still get the same error as before.
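One possible explanation (an assumption, not confirmed in this thread) is that ./launcher rebuild app runs inside a Docker build container, which does not inherit the host shell's proxy variables, so the Clash proxy that works for wget on the host never applies to pnpm inside the container. Proxy settings can be passed into the container through the env section of containers/app.yml; a minimal sketch, with a placeholder proxy address:

```yaml
## containers/app.yml (sketch; the address and port are placeholders)
env:
  LANG: en_US.UTF-8
  ## 172.17.0.1 is Docker's default bridge gateway, i.e. the host as
  ## seen from inside the container; the port must match whatever your
  ## Clash instance actually listens on.
  HTTP_PROXY: http://172.17.0.1:7890
  HTTPS_PROXY: http://172.17.0.1:7890
```

Whether the container can reach the host's proxy also depends on the proxy listening on the Docker bridge interface (not only on 127.0.0.1), so this is a sketch to adapt, not a guaranteed fix.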

Questions

Is there any way to successfully deploy Discourse on a server located in mainland China?


Hi 🙂

Have you had a look at 🇨🇳 Discourse Official Install Guide | Discourse 云平台安装? It covers typical issues when installing Discourse in mainland China.


Yes, I've seen that post. It's essentially a Chinese translation of INSTALL-cloud.md.

My problem was solved by adding the following line to my app.yml file:
- "templates/web.china.template.yml"
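For context, that line belongs in the templates section of containers/app.yml. A sketch, assuming the stock template list from a default install:

```yaml
## containers/app.yml (sketch; your existing template list may differ)
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
  ## Swaps package sources (npm and friends) to mirrors that are
  ## reachable from mainland China, avoiding the registry.npmjs.org
  ## timeouts seen in the build log above:
  - "templates/web.china.template.yml"
```

After editing the file, rerun ./launcher rebuild app to apply the change.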

If you're running into a similar issue, this article may help: Discourse Installation Guide.

It covers China-specific sections, including the problem you ran into.

I completely missed that line. Thank you so much!


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.