There are commands like rake posts:rebake (e.g. after a migration).
Are there any others, or anything else, that will speed up the forum? I have the impression that after changing to a host with more RAM it is, if anything, even slower. More GB didn’t help even though I’m testing without any traffic. I’m wondering whether there are commands that optimize the database, etc., because maybe that’s why navigating between links is terribly slow (1-2 seconds). Compared to NodeBB, for example, it’s a bit disqualifying.
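For reference, the usual maintenance commands are run from inside the app container. This is a sketch, not a magic speedup: the PostgreSQL maintenance shown here normally runs automatically via autovacuum, so doing it by hand rarely changes much.

```shell
cd /var/discourse
./launcher enter app

# Rebuild the cooked HTML of all posts (e.g. after a migration
# or a markdown pipeline change). This is CPU heavy and slow.
rake posts:rebake

# Standard PostgreSQL maintenance: reclaim dead rows and refresh
# planner statistics for every database in the cluster.
su postgres -c 'vacuumdb --all --analyze'
```

If pages still take 1-2 seconds after this, the bottleneck is almost certainly the host or the configuration, not missing maintenance.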
These settings are usually set automatically by discourse-setup based on the system specs (number of CPU cores, amount of RAM) at the time that script is run. It’s also safe to run it again if your server specs change.
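Concretely, assuming the standard /var/discourse install location, re-tuning looks like this (a sketch; `UNICORN_WORKERS` and `db_shared_buffers` are the main values discourse-setup derives from CPU cores and RAM):

```shell
cd /var/discourse

# Re-run setup: it re-detects CPU cores and RAM and rewrites
# the tuned values in containers/app.yml.
./discourse-setup

# Alternatively, edit containers/app.yml by hand (UNICORN_WORKERS,
# db_shared_buffers) and then rebuild the container to apply:
./launcher rebuild app
```

Note that `./launcher rebuild app` takes the site offline for a few minutes while the container is rebuilt.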
There are many differences between AWS instance types. Some use EBS-backed disks, which go over the network for disk access, making latency higher. Some have fast local NVMe drives, but those don’t persist data. There are also the Z and C instance-type families, which offer much faster CPUs.
However, all of that ends up more complicated and expensive than a Digital Ocean droplet, which has acceptable performance for small communities at $5 and a pretty fast CPU in the $40 CPU-optimized offering.
Why would there be a secret command to make things fast? Why on earth would we default to a slow mode?
If you are having performance issues you need to bring hard data. What is the slow route, what is the community size, what is the database size, did you try removing all plugins and themes, did you try running in a $5 DO droplet, etc.
no intention to blame anyone, especially as i haven’t looked into it myself, particularly the postgres instance/config… but discourse is damn slow. not sure what’s responsible for that; i guess the ruby ORM plays its part.
of course you can always throw bigger iron at it, more SSDs, more RAM… up to a certain point, but that doesn’t change the main point: discourse is pretty demanding/slow, and even tiny installs require a decent host
I really disagree with this statement. I know a number of small communities who run on a $5 VPS. A slow discourse installation is usually indicative of poor host selection, or misconfiguration.
Remember that Discourse isn’t a website, it’s an application: once loaded in your browser, the data relayed back and forth is minimal.
If you follow the standard installation guide, which is the only installation we support here, then all of the tuning for CPU/RAM is done automagically. You haven’t given us any examples or comparators here, I would strongly encourage you to provide us with some specifics.
i can do some benchmarks, sure. I run it on a $5 virtualised host, with really minimal hardware by today’s standards; i’m aware of that. And i wasn’t comparing it to other forum solutions, but i know what postgres can handle and deliver even when run in a docker container inside a VM — i have almost 20 years of experience in database development
okay okay, and i was a bit triggered by the “do you run it on hardware from the last century, aka spinning drives” attitude
i rephrase to “discourse is more demanding than simpler systems”
What do you consider to be adequate performance?
Describe the scenarios in which slow performance is occurring.
How have you determined that the Discourse platform (or even your hosting) is the source of the slow performance, and that the database is a limiting factor, etc.?
Perhaps you could share a couple of webpagetest.org links as a starting point.
While that’s technically fair, I think it misses the point from a community growth and UX perspective. There’s a lot to be said for the amount of traffic our communities attract from search.
IMO it’s important that their first visit to that link loads quickly.
And it does. Excluding asset-heavy communities such as NPN, I don’t really see any Discourse communities with finished load times over two seconds, and DOMContentLoaded is typically well under 1000 ms.
WebPageTest is a terrible metric. Open a browser, open the developer tools and switch to the Network tab, empty the cache, and force a hard reload. All the numbers are right in front of you.
You’re suggesting there’s a problem, but you aren’t giving us any examples. It would be really good if you could substantiate these claims.
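One quick way to produce a shareable number without arguing over tools is a curl timing check from the command line. A sketch — `forum.example.com` is a placeholder, and `/latest.json` is just a representative Discourse endpoint that exercises the application rather than a cached asset:

```shell
# Time a single request and break it down: DNS lookup,
# time to first byte, and total transfer time.
curl -so /dev/null \
  -w 'DNS: %{time_namelookup}s  TTFB: %{time_starttransfer}s  total: %{time_total}s\n' \
  https://forum.example.com/latest.json
```

Run it a few times (and ideally from more than one location) so a cold cache or DNS lookup doesn’t skew the result.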
It’s a perfectly valid tool that can be used as a starting point, or to delve deeper into specific scenarios (OS, location, bandwidth, latency) if you wish. It’s also a convenient way to share a result from a controlled scenario for others to look at.
Network tab is also perfectly valid as long as you understand that you are seeing literally just “your” experience, likely from your desktop over whatever connection you are on. It’s a good litmus test, it takes but a few seconds and it may or may not give you what you need to optimize for your visitors.
Both have merit. Neither are a “terrible metric” at all.
@eextra It’s also worth mentioning that you should see a response-time counter when you’re logged into Discourse as an admin. You also have the ability to log NGINX performance reports via the admin panel.
Wanted to add my 2 cents because I use those testing sites a lot and find them helpful, especially the ones that test from multiple locations and run multiple passes.
I find it interesting that I get different results from those sites vs. my browser when it comes to optimizations in the 100-200 ms range, although they seem to be accurate for times larger than that.
Sometimes I opt for optimizations that make the speed-testing sites happy, because if they all report similar measurements, I assume Google does too — which I could be wrong about, as its algorithm is closed.