Do your users ever complain about the speed of Discourse?


(Sam Saffron) #7

For the record, we do plan to improve performance for our upcoming release.

On my personal goal list:

  • Reduce initial JS payload by at least half
  • Improve JS render time of the front page by at least 30%
  • Improve JS render time of the topic page by at least 50%

On @eviltrout’s list:

  • Eliminate ES6 bundling overhead (at least 200ms on desktop)
  • Improve general performance

I’m not promising all these goals will be met, but I will be quite happy if we can hit the 2-second mark on initial page render.
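
As a rough way to eyeball these same numbers on a running instance, the browser’s Performance API exposes the script payload and basic load timings from the devtools console. This is only a sketch, not the instrumentation the team uses:

```typescript
// Rough payload/timing check from the browser devtools console (a sketch,
// not the team's own instrumentation). Sums the transfer size of script
// resources and prints basic navigation timings.
const scripts = (performance.getEntriesByType("resource") as PerformanceResourceTiming[])
  .filter((e) => e.initiatorType === "script");
// transferSize can be 0 for cross-origin assets served without Timing-Allow-Origin.
const jsKB = scripts.reduce((sum, e) => sum + e.transferSize, 0) / 1024;

const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
console.log(`JS payload: ${jsKB.toFixed(0)} KB across ${scripts.length} files`);
console.log(`TTFB: ${(nav.responseStart - nav.requestStart).toFixed(0)} ms`);
console.log(`DOMContentLoaded: ${nav.domContentLoadedEventEnd.toFixed(0)} ms`);
console.log(`load event end: ${nav.loadEventEnd.toFixed(0)} ms`);
```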


(Jeff Atwood) #8

The Android thing @mcwumbly mentioned is the main Achilles heel at the moment. It is really bad there, literally 3 to 5 times slower than it should be.

That has gone on for a year, and I finally started pushing hard on both the Chrome Android and Ember sides to get fixes in place. (To be fair, it is mostly a Chrome problem.)


#9

Great to hear that performance is a priority for the project!


(Sven) #10

Are there any planned improvements on the backend as well? The TTFB of the HTML doc is rather high.


(Jeff Atwood) #11

From what geographic location? Because the speed of light is a bitch.


(Sven) #12

The test above was from Switzerland.

Here are some more test results, focusing just on the HTML doc:

Location             Host                                 Connect      TTFB         Total
USA, Dallas          meta.discourse.org (64.71.168.201)   0.067 secs   0.276 secs   0.537 secs
USA, Atlanta         meta.discourse.org (64.71.168.201)   0.083 secs   0.308 secs   0.635 secs
USA, New York        meta.discourse.org (64.71.168.201)   0.092 secs   0.344 secs   0.678 secs
UK, London           meta.discourse.org (64.71.168.201)   0.132 secs   0.542 secs   1.197 secs
NL, Amsterdam        meta.discourse.org (64.71.168.201)   0.138 secs   0.603 secs   1.286 secs
Germany, Frankfurt   meta.discourse.org (64.71.168.201)   0.144 secs   0.597 secs   1.175 secs
Brazil, Sao Paulo    meta.discourse.org (64.71.168.201)   0.184 secs   0.758 secs   1.547 secs
Singapore            meta.discourse.org (64.71.168.201)   0.185 secs   0.771 secs   1.678 secs
JP, Tokyo            meta.discourse.org (64.71.168.201)   0.241 secs   0.629 secs   1.186 secs
Australia, Sydney    meta.discourse.org (64.71.168.201)   0.277 secs   1.130 secs   2.248 secs
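
For anyone wanting to reproduce this kind of measurement, a minimal Node.js sketch along these lines works. It is only an approximation of whatever tool produced the table above: the “connect” figure below includes the TLS handshake, and TTFB is taken at the first body chunk, so the numbers will differ slightly.

```typescript
// Minimal connect/TTFB/total measurement of the HTML doc (sketch only).
// Assumes Node.js 18+ and HTTPS on port 443.
import https from "node:https";

function measure(host: string, path = "/"): Promise<{ connect: number; ttfb: number; total: number }> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    let connect = 0;
    let ttfb = 0;
    const req = https.get({ host, path }, (res) => {
      res.once("data", () => { ttfb = Date.now() - start; });
      res.on("end", () => resolve({ connect, ttfb, total: Date.now() - start }));
      res.resume(); // drain the body so "end" fires
    });
    // "secureConnect" fires after the TLS handshake completes.
    req.on("socket", (sock) =>
      sock.once("secureConnect", () => { connect = Date.now() - start; })
    );
    req.on("error", reject);
  });
}

measure("meta.discourse.org").then((t) =>
  console.log(`connect ${t.connect} ms, ttfb ${t.ttfb} ms, total ${t.total} ms`)
);
```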

(Jeff Atwood) #13

Looks like it scales quite linearly with distance. Try from a California endpoint.


(Sven) #14

Yes, true. So I guess your server is located in the Bay Area, which explains the latency over distance.

Are there any performance recommendations for running Discourse in a Docker container? I don’t see the expected results on our Discourse installation (v1.5.0.beta7 +3).


(Jeff Atwood) #15

What result are you expecting? I’m not clear… the general advice for any Ruby app is that you want very high single-thread CPU performance. Having lots of cores is not super helpful (beyond two).


(Matt Palmer) #16

Nothing Docker-specific, but in general: get a fast (single-core performance) CPU, fast disks, lots of RAM, and a low-latency network. The CPU’s single-core performance is the limiting factor for a lot of the page generation time (page render isn’t multi-threaded, so lots of slow cores isn’t useful), while most of the rest is down to how quickly the DB can service requests, which is helped by fast disks (so data gets loaded and saved quickly) and lots of RAM (so you never have to touch those fast disks in the first place). A low-latency network means you’re not left sitting around all day waiting for the packets to get from column A to column B.


(Sven) #17

Network latency isn’t an issue, as you can see from the test results above. Hardware isn’t an issue either, as it runs on an SSD with an E5-1650 v2 @ 3.50GHz and 64GB of RAM (the system is very idle). Other frameworks on this system run very fast.

So my guess is that it is a configuration issue, either with Docker itself or within the container (Ruby, nginx, the DB, etc.).

                          duration (ms)   from start (ms)   query time (ms)
Executing action: index   189.3           +6.0              48 sql  28.1

(Sven) #18

Maybe my expectations are a bit too high. I’m not saying it is slow, but my expectation for the response time is <=150ms (considering a network latency of 25ms). Are the SQL queries being cached?


(Rafael dos Santos Silva) #19

I get ~105ms going from page to page (like categories view to latest view).

It is pretty fast for us.


(Sam Saffron) #20

I am constantly trying to shave off server time; this is very important for adoption, because if we need hugely powerful systems to run Discourse, it hurts us.

I spent a fair amount of time on this over the past few months; I recently blogged about it here: Fixing Discourse performance regressions

That said, our biggest pain is not on the server at the moment (unless you have very large forums with enormous topics).

Our pain is on the client:

http://www.webpagetest.org/result/151220_J2_D1R/1/details/cached/

(Note that we do cache; in fact, that front page request was generated in 3ms server-side. The rest of the cost is networking, SSL, and so on.)
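
Conceptually, serving anonymous page views from a short-lived cache is what makes a 3ms response possible: most visitors get pre-rendered HTML, and the expensive render only runs on a miss. Discourse itself is a Ruby on Rails app, so its real caching code looks nothing like the following; the sketch below is only an illustration of the idea, and the `_t` auth-cookie check and 60-second TTL are assumptions made for the example.

```typescript
// Illustration only: a tiny in-memory cache for anonymous page views,
// keyed by path with a short TTL. Not Discourse's actual implementation.
import http from "node:http";

type Entry = { body: string; expires: number };
const cache = new Map<string, Entry>();
const TTL_MS = 60_000; // assumed TTL for this sketch

function renderPage(path: string): string {
  // Stand-in for the expensive server-side render (DB queries, templates).
  return `<html><body>rendered ${path} at ${new Date().toISOString()}</body></html>`;
}

http.createServer((req, res) => {
  // Assumption: "_t" is the session cookie; logged-in users bypass the cache.
  const loggedIn = Boolean(req.headers.cookie?.includes("_t="));
  const key = req.url ?? "/";
  const hit = loggedIn ? undefined : cache.get(key);

  let body: string;
  if (hit && hit.expires > Date.now()) {
    body = hit.body; // cache hit: no render work at all
  } else {
    body = renderPage(key);
    if (!loggedIn) cache.set(key, { body, expires: Date.now() + TTL_MS });
  }
  res.writeHead(200, { "content-type": "text/html" });
  res.end(body);
}).listen(8080);
```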


(Jeff Atwood) #21

As a baseline, here’s what I get when I select this topic from the topic list (i.e., the Discourse JavaScript app is already loaded in my browser’s memory, so it just pulls the JSON data necessary to render the topic).

Here’s what I get when I press F5 to refresh the page, fully reloading the Discourse JavaScript app.

Basically around 190ms (navigating to topic within loaded JS app) → 330ms (loading whole topic and JS app from scratch). There is some variance around those numbers:

load whole page (ms)   load topic only (ms)
--------------------   --------------------
315                    235
305                    196
329                    159
360                    184
330                    173
335                    190
312                    227
338                    176
329                    205

This is from California, so my time to first byte is going to be really good.


(Matt Palmer) #22

I wasn’t referring to Internet latency, necessarily; if your web appserver and DB are relatively far away from each other, the high number of DB queries will cause a larger-than-necessary delay in page rendering.


(Sven) #23

@codinghorror, thanks for the details. Those numbers correlate with what we are getting on our system. The results are indeed very good considering the number of SQL queries. The only way to further decrease the TTFB is probably a disk-cache solution.

@sam, I totally agree on prioritizing the optimization of front-page rendering.

Thanks for the excellent support and keep up the excellent work!


(Erlend Sogge Heggen) #24

Relevant discussion here:

And essentially the solution here:

https://eviltrout.com/2016/02/25/fixing-android-performance.html


(Jeff Atwood) #26

Just comparing Dec 2015 with May 2016, same topic (this topic), load topic only (using back and forward buttons on my mouse / browser):

old (ms)   new (ms)
--------   --------
235        150
196        145
159        135
184        163
173        142
190        146
227        201
176        136
205        146

old average: 194 (stdev 25), new average: 152 (stdev 20)

I am in CA, about an hour from the datacenter, so network is negligible; this would be all client and server time.
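
The summary figures above can be reproduced from the two columns with a few lines of code (a quick sketch using the sample standard deviation):

```typescript
// Recompute the averages and standard deviations quoted above.
const oldTimes = [235, 196, 159, 184, 173, 190, 227, 176, 205];
const newTimes = [150, 145, 135, 163, 142, 146, 201, 136, 146];

function stats(xs: number[]): { mean: number; stdev: number } {
  const mean = xs.reduce((a, b) => a + b, 0) / xs.length;
  const variance = xs.reduce((s, x) => s + (x - mean) ** 2, 0) / (xs.length - 1);
  return { mean: Math.round(mean), stdev: Math.round(Math.sqrt(variance)) };
}

console.log(stats(oldTimes)); // { mean: 194, stdev: 25 }
console.log(stats(newTimes)); // { mean: 152, stdev: 20 }
```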