wait… I think I’ve got it. I disabled the proxy and rebuilt without the Cloudflare template, then enabled the proxy again afterwards, and now I can access WordPress and the forum with strict SSL!
You’d be rate-limited very easily if you chose not to use the Cloudflare template…
Discourse will think many people are registering through the same IP (the Cloudflare proxy) and start rate limiting them.
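For reference, enabling it is a small edit to the container definition plus a rebuild. A minimal sketch, assuming a standard /var/discourse install where the container is called app (your paths and name may differ):

    cd /var/discourse
    # Add the Cloudflare template to the templates: list in containers/app.yml, e.g.
    #   templates:
    #     - "templates/web.template.yml"
    #     - "templates/web.ratelimited.template.yml"
    #     - "templates/cloudflare.template.yml"
    nano containers/app.yml
    # Rebuild so the container's NGINX uses the real client IPs Cloudflare passes
    # along, instead of treating every visitor as the proxy's IP.
    ./launcher rebuild app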
hmm… so I’d better do it again, without the Let’s Encrypt email but with the Cloudflare template… do you know if I should leave the proxy enabled when I rebuild with the Cloudflare template?
Thanks so much for all the help… I’m really more of a creative.
It’s my registrar, not just DNS… and I can’t afford to pay the huge sum they require for me to use a different DNS. So, oops.
discourse-setup no longer requires an email to enrol certificates, and hasn’t for a while. Unless you amend the yml to disable SSL, it will always default to HTTPS.
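For context, that default comes from the template lines in the container yml. A rough sketch of what “amend the yml to disable SSL” means, assuming a standard /var/discourse install (shown only to explain the default, not as a recommendation):

    cd /var/discourse
    # HTTPS comes from these two lines in containers/app.yml; removing or commenting
    # them out (and rebuilding) is what disables SSL inside the container:
    #   - "templates/web.ssl.template.yml"
    #   - "templates/web.letsencrypt.ssl.template.yml"
    # LETSENCRYPT_ACCOUNT_EMAIL is optional; leaving it unset no longer blocks enrolment.
    ./launcher rebuild app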
Using Cloudflare for DNS and turning the orange cloud on are totally different things. Using Cloudflare in the context of the above refers to their proxy and optimizations, which have done pretty screwy things in the past. Their DNS is fine, good even.
Your Discourse installation doesn’t need to change: you have HTTPS working and everything there is now fine. If SSL works and the Cloudflare template is enabled, don’t touch it.
It sounds like the issue is now on the WordPress side. How do you have that installed? Is it just a VPS, or are you on a WordPress host of some form?
This is a very common configuration, so I’m pretty confident it’s an easy fix.
This is a Very Bad Idea. Once Discourse has enrolled the certificate from Let’s Encrypt, the renewals will succeed, as they happen via a different mechanism. There’s no need to disable TLS between the server and the CDN.
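If you want to see the enrolment and renewals working for yourself, here’s a quick check you can run from any machine (forum.example.com is a placeholder for your hostname):

    # Print the issuer and validity dates of the certificate the server is serving.
    echo | openssl s_client -connect forum.example.com:443 -servername forum.example.com 2>/dev/null \
      | openssl x509 -noout -issuer -dates
    # Expect a Let's Encrypt issuer and a notAfter date roughly 90 days out; the date
    # should keep moving forward on its own as the container renews the certificate.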
Why do this in light of the above? Aside from creating additional local load from processing UFW rules for all traffic, you run the risk that your rules fall out of date, which is a quick way to a bunch of network errors. Cloudflare periodically brings new IP ranges online, and the first you’ll hear of it is when your users can’t access the site. Let the certificate enrol, and if you do want to use Cloudflare, just tune a page rule accordingly.
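To illustrate the upkeep you’d be signing up for, a sketch of what keeping UFW in sync with Cloudflare’s published ranges would look like; this is the part that silently goes stale (illustration only, not a recommendation):

    # Cloudflare publishes its ranges at these URLs; every new range means re-running this.
    for ip in $(curl -s https://www.cloudflare.com/ips-v4); do
      sudo ufw allow proto tcp from "$ip" to any port 80,443 comment 'Cloudflare'
    done
    for ip in $(curl -s https://www.cloudflare.com/ips-v6); do
      sudo ufw allow proto tcp from "$ip" to any port 80,443 comment 'Cloudflare'
    done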
I use Cloudflare in DNS-only mode; it’s straightforward. Just click to turn off the “orange cloud” in your DNS control panel, so the cloud is grey instead. That’s all you need to do.
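You can confirm which mode you’re in from a terminal: with the grey cloud the name resolves straight to your own server, with the orange cloud it resolves to Cloudflare’s proxy addresses instead (forum.example.com is a placeholder):

    dig +short forum.example.com A
    # Grey cloud (DNS only): your VPS's real address.
    # Orange cloud (proxied): Cloudflare addresses, not your server.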
That no longer works as it once did. If you don’t want Let’s Encrypt, you have to configure it by hand rather than with discourse-setup.
So what certificate would Discourse get in the absence of a Let’s Encrypt email? A self-generated one, or a certificate issued with an arbitrary email?
In either case, it should still work fine with Cloudflare SSL, as they allow hosts with a valid certificate in their Full SSL mode and above.
You’ll need to check the source, as I don’t quite remember. I think it uses the admin email. If its check that the server is available on port 443 fails, it’ll refuse to install.
I may be absolutely wrong here, but per this:
read_config "LETSENCRYPT_ACCOUNT_EMAIL"
local letsencrypt_account_email=$read_config_result
if [ -z $letsencrypt_account_email ]
then
letsencrypt_account_email="me@example.com"
fi
if [ "$letsencrypt_account_email" = "me@example.com" ]
then
local letsencrypt_status="ENTER to skip"
else
local letsencrypt_status="Enter 'OFF' to disable."
fi
it seems like the default config check should give an option to enter “OFF” to disable Let’s Encrypt? Maybe I’m totally wrong and looking in the totally wrong place?
Disabling Let’s Encrypt isn’t the answer.
On a standard install, it isn’t.
In an advanced install (e.g. someone using a reverse proxy), it is totally an answer.
Could you elaborate as to why?
It’s easy to make the certificate scenario work. Even if you operate a second webserver on the same machine and proxy locally, it’s easy to make the certificate work, so why wouldn’t you?
Why would I request multiple certificates?
When I can just request one (unified) certificate from Let’s Encrypt through the reverse proxy (e.g. nginx/Caddy), why would I want a second certificate inside the Discourse container when it won’t even be used by anything?
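As a rough sketch of that setup, assuming a standard /var/discourse install with NGINX and certbot on the host (the names, the port, and the certbot plugin are illustrative, not the only way to do it):

    cd /var/discourse
    # In containers/app.yml: drop the two SSL templates and expose the web only locally, e.g.
    #   expose:
    #     - "127.0.0.1:8080:80"
    ./launcher rebuild app
    # Enrol a single certificate on the host for every name behind the proxy:
    sudo certbot --nginx -d forum.example.com -d www.example.com
    # The host NGINX then terminates TLS and proxies https://forum.example.com to
    # http://127.0.0.1:8080, so nothing inside the container needs its own certificate.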
I think the general answer is to avoid the proxy complexity entirely, if you can.
But it is unavoidable once you start looking at a deployment past the standard one- or two-container setup.
Your forum has to get really quite big to need something like Cloudflare’s proxy. It’s something to consider when your forum starts getting overwhelmed. Unless you’re migrating a large forum to Discourse, no one who is installing Discourse should be thinking about it.
I really don’t agree; getting HTTPS right is simple. In this case the user has two sites on two different servers under a single domain.
There’s no technical reason that both sites can’t be served over HTTPS, regardless of whether Cloudflare is activated. It’s easy to do once you understand the enrolment method used by Let’s Encrypt and how to tune Cloudflare for Discourse. Cloudflare has a Full SSL mode, which is there for precisely this scenario: TLS from the server to their network, TLS from their network to clients, decrypted in-between so that they can cache and ‘optimise’, although with Discourse we know that last bit doesn’t work so well.
Mostly this, although there’s definitely benefit in having a page rule caching /uploads/: it will help take the strain off for a while and make a lower-end VPS last that bit longer.
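If you do add that page rule, the cf-cache-status response header tells you whether content is actually being served from Cloudflare’s cache (placeholder URL, pick any real file under /uploads/ on your forum):

    curl -sI https://forum.example.com/uploads/default/original/1X/example.png | grep -i cf-cache-status
    # HIT means Cloudflare served it from cache; MISS or DYNAMIC means it went to your server.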