Do your users ever complain about the speed of Discourse?


#1

Hi,

I have one popular forum running on an old-style forum software that is pretty fast. Some of my most active users constantly post and browse the site, and they get annoyed pretty quickly if there are any delays in posting and page loads.

My limited experience with Discourse via this meta forum and on my own test VPS yields decent but not stunning performance in terms of page load time. Here’s a Pingdom test of this thread:


"Your website is faster than 67% of all tested websites"

Not too bad. But my (marginally complex) Drupal sites usually end up being somewhere around 85 to 95% faster than other tested websites on Pingdom. Then again, Discourse is 100% cooler than all websites on the internet, so that’s definitely a major point in its favor. :wink:

What has your experience been? Do users ever complain about the speed of your Discourse forum? Or do the incredible features of Discourse compensate for its marginal page load times?


(Spero Koulouras) #2

I’m not an expert on Pingdom, but is it possible these results don’t accurately reflect the true end-user experience?

  • The pingdom results appear to include “poll” events from Discourse which happen in the background and don’t impact what is rendered.

  • This test appears to capture only the first load of a page and may not be representative of performance when actually scrolling through and reading topics and posts.

In my informal use of Discourse and other forums, performance has never been the first-order issue.


#3

That’s true, Pingdom doesn’t take into account performance after loading the initial page. However, if the initial page load is too slow, first-time users will simply abort and not come back. I’m not saying this is necessarily the case with Discourse, but I’d just like to make sure that page load time isn’t a widespread problem.


(Spero Koulouras) #4

Couldn’t agree more on the need for a great user experience and fast initial page load.

If I understand Pingdom and Discourse correctly, the page is rendered prior to the first “poll” message, putting the true page load time under 0.5 seconds instead of at 3.2 seconds, which would dramatically change the reported “faster than xx%” result. Check out the pic.


#5

I see, I think you’re right. Nice catch.

And has anyone received any subjective comments from normal users about the perceived speed (or lack thereof) of Discourse?


(Dave McClure) #6

I’ve not heard any complaints from “normal” users.

But one known issue is the performance on Android. That issue is not something the Discourse team can easily resolve alone, but it’s getting attention from folks who work on Ember and Chrome / V8, so improvements are expected on that front longer term.


(Sam Saffron) #7

For the record we do plan to improve performance for our upcoming release.

On my personal goal list:

  • Reduce initial JS payload by at least half
  • Improve JS render time of the front page by at least 30%
  • Improve JS render time of the topic page by at least 50%

On @eviltrout’s list is

  • Eliminate ES6 bundling overhead (at least 200 ms on desktop)
  • Improve general performance

Not promising all these goals will be met, but I will be quite happy if we can hit the 2 second mark on initial page render.


(Jeff Atwood) #8

The Android thing @mcwumbly mentioned is the main Achilles heel at the moment. It is really bad there, literally 3 to 5 times slower than it should be.

That has gone on for a year and I finally started pushing hard on both the Chrome Android and Ember sides to get fixes in place. (to be fair it is mostly a Chrome problem)


#9

Great to hear that performance is a priority for the project!


(Sven) #10

Are there any planned improvements on the backend as well? TTFB of the HTML doc is rather high.


(Jeff Atwood) #11

From what geographic location? Because the speed of light is a bitch.


(Sven) #12

The test above was from Switzerland.

Here are some more test results, just focusing on the HTML doc:

Location             Host                                 Connect      TTFB         Total
USA, Dallas          meta.discourse.org (64.71.168.201)   0.067 secs   0.276 secs   0.537 secs
USA, Atlanta         meta.discourse.org (64.71.168.201)   0.083 secs   0.308 secs   0.635 secs
USA, New York        meta.discourse.org (64.71.168.201)   0.092 secs   0.344 secs   0.678 secs
UK, London           meta.discourse.org (64.71.168.201)   0.132 secs   0.542 secs   1.197 secs
NL, Amsterdam        meta.discourse.org (64.71.168.201)   0.138 secs   0.603 secs   1.286 secs
DE, Frankfurt        meta.discourse.org (64.71.168.201)   0.144 secs   0.597 secs   1.175 secs
BR, Sao Paulo        meta.discourse.org (64.71.168.201)   0.184 secs   0.758 secs   1.547 secs
SG, Singapore        meta.discourse.org (64.71.168.201)   0.185 secs   0.771 secs   1.678 secs
JP, Tokyo            meta.discourse.org (64.71.168.201)   0.241 secs   0.629 secs   1.186 secs
AU, Sydney           meta.discourse.org (64.71.168.201)   0.277 secs   1.130 secs   2.248 secs
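For anyone who wants to reproduce this kind of connect/TTFB/total breakdown without a third-party service, here’s a rough sketch in Python using only the standard library (the host is the one tested above; single samples are noisy, so run it a few times and compare medians):

```python
import http.client
import time

def timings(host, path="/", conn_cls=http.client.HTTPSConnection, port=None):
    """Measure connect / TTFB / total time for a single GET request.

    Rough, single-sample numbers -- repeat and take the median for
    anything you intend to compare across locations.
    """
    t0 = time.perf_counter()
    conn = conn_cls(host, port, timeout=10)
    conn.connect()                       # TCP (+ TLS) handshake
    t_connect = time.perf_counter() - t0
    conn.request("GET", path)
    resp = conn.getresponse()            # returns once status line + headers arrive
    t_ttfb = time.perf_counter() - t0
    resp.read()                          # drain the body
    t_total = time.perf_counter() - t0
    conn.close()
    return t_connect, t_ttfb, t_total

# Example (requires network access):
#   c, t, tot = timings("meta.discourse.org")
#   print(f"Connect {c:.3f}s  TTFB {t:.3f}s  Total {tot:.3f}s")
```

Note this measures the HTML doc only, like the table above; it says nothing about client-side render time.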

(Jeff Atwood) #13

Looks like it scales quite linearly with distance. Try from a California endpoint.


(Sven) #14

Yes, true. So I guess your server is located in the Bay Area, which explains the latency over distance.

Are there any performance recommendations for running Discourse in a Docker container? I don’t see the expected results on our Discourse installation (v1.5.0.beta7 +3).


(Jeff Atwood) #15

What result are you expecting? I’m not clear… the general advice for any Ruby app is that you want very high single-thread CPU performance. Lots of cores is not super helpful (beyond two).


(Matt Palmer) #16

Nothing Docker-specific, but in general: get a fast (single-core performance) CPU, fast disks, lots of RAM, and a low-latency network. The CPU’s single-core performance is the limiting factor for a lot of the page generation time (page render isn’t multi-threaded, so lots of slow cores isn’t useful), while most of the rest is down to how quickly the DB can service requests, which is helped by fast disks (so data gets loaded and saved quickly) and lots of RAM (so you never have to touch those fast disks in the first place). A low-latency network means you’re not left sitting around all day waiting for the packets to get from column A to column B.
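Since single-core speed is the limiting factor Matt describes, a quick (admittedly crude) way to compare candidate hosts is to time a fixed CPU-bound workload on one thread. This is just an illustrative sketch, not an official benchmark -- real Discourse render times depend on Ruby, not hashing, but relative differences between hosts tend to track:

```python
import hashlib
import time

def single_core_score(rounds=200_000):
    """Time a fixed CPU-bound workload on a single thread.

    Lower is better; compare the same `rounds` value across machines.
    Chained SHA-256 hashing is just a stand-in workload that cannot
    be parallelized, so it isolates single-thread performance.
    """
    payload = b"discourse" * 64
    digest = b""
    t0 = time.perf_counter()
    for _ in range(rounds):
        digest = hashlib.sha256(payload + digest).digest()
    return time.perf_counter() - t0

# Example: run with the same `rounds` on each candidate server
# elapsed = single_core_score()
```

Run it on each box you’re considering; a host that scores 2x slower here will generally feel proportionally slower serving Ruby page renders too.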


(Sven) #17

Network latency isn’t an issue, as you can see from the results above. Hardware isn’t an issue either: it runs on SSDs with an E5-1650 v2 @ 3.50 GHz and 64 GB of RAM (the system is mostly idle). And other frameworks on this system run very fast.

So my guess is that it’s a configuration issue, either in Docker itself or within the container (Ruby, nginx, the DB, etc.).

                          duration (ms)   from start (ms)   query time (ms)
Executing action: index   189.3           +6.0              48 sql: 28.1

(Sven) #18

Maybe my expectations are a bit too high. I’m not saying it is slow, but my expectation for the response time is <=150 ms (assuming a network latency of 25 ms). Are the SQL queries cached?


(Rafael dos Santos Silva) #19

I get ~105ms going from page to page (like categories view to latest view).

It is pretty fast for us.


(Sam Saffron) #20

I am constantly trying to shave off server time; this is very important for adoption, because if Discourse needs hugely powerful systems to run, it hurts us.

I’ve spent a fair amount of time on this in the past few months; I recently blogged about it here: Fixing Discourse performance regressions

That said, our biggest pain is not on the server at the moment (unless you have very large forums with enormous topics)

Our pain is on the client

http://www.webpagetest.org/result/151220_J2_D1R/1/details/cached/

(Note that we do cache; in fact, that front-page request was generated in 3 ms server side. The rest of the cost is networking, SSL, and so on.)