Just ran on my Note 5 and my 6+, both in Chrome. The Note 5 got 302ms and the 6+ got 309ms. Not sure what you did differently. I will test it at the office with the drawer full of devices… But I suspect the numbers you posted are rather off on iOS
I don’t see where Jeff suggested Android manufacturers are cheaping out; they’re merely using their money to build CPUs with more cores rather than investing in new designs with better single-core performance. Both approaches cost a lot of money, but one of them is fundamentally better for JavaScript performance, since most, if not all, JS engines are almost entirely single-threaded.
It seems the Android manufacturers are more interested in slapping n slow CPU cores on a die than they are in producing very fast CPU cores. And this is quite punishing when it comes to JavaScript.
While this is one side of the problem, the other angle is battery life. The vast majority of mobile users aren’t that concerned about JavaScript speed and would far rather have improved battery life than faster JS processing in their browser.
I think John’s read of Jeff’s post comes from Jeff’s choice to say “slapping” rather than something like “putting”.
Slapping parts down on a chip suggests some level of carelessness… which suggests cheapness because they’re not (in this interpretation) spending money on diligent engineering effort.
(I don’t know if any of this is true or not, but I think it explains where John’s read probably came from.)
It could still be that one approach costs notably more than the other when a phone manufacturer is choosing what features its phone CPU gets. I’m thinking of the situation where Apple, Samsung, and Huawei are ARM architectural licensees, LG is an ARM core licensee (a cheaper license), and I guess companies like Sony and HTC don’t have any ARM licenses and buy chips from Qualcomm or whomever. If ARM CPU and/or SoC designs trend toward favoring multi-core count and performance, it seems plausible that a company would have to go a more custom and expensive route to get a chip that focuses on single-core performance.
I get what you’re saying about how it’s just a choice of what they’re spending their money/effort on. Agreed. I wrote the paragraph above to speculate that maybe part of the money-spending decision did involve a cheapness debate of some sort.
@codinghorror Have you considered adding Tapatalk support to Discourse? I’m thinking Android users with slow phones could just use Tapatalk instead of a web browser.
The problem is you tested in Chrome, which is hobbled on iOS because it doesn’t get full access to Safari’s JS engine. Re-run the test with the native browser on each device (Chrome on Android, Safari on iOS) and you’ll see a much bigger difference.
Funny but rather relevant story: I used to use Chrome on my iPhone 6. Due to Apple craziness it simply does not run at the same speed as Safari; in fact, we have clocked it at half the speed. This happens because third-party browsers are forced to use WebView.
Even at half speed it was still fairly fast and usable
But… I needed to do something about the composer’s position: fixed bugs in Safari, so I forced myself to use it for a bit.
Once I used Safari for a few hours on the iPhone, I simply could not tolerate the artificial slowness of Chrome. I stopped using it on my iPhone.
Stuff is mainly acceptable to users because they know no better; once any Android user experiences Discourse on a 6s, going back to Android will be incredibly painful.
On iPhone 6 Safari, Discourse performance is excellent. For some Android users it can be tolerable or fast enough, but on Android it is never excellent and often terrible.
Consider trying to identify areas of your app where performance bottlenecks could be mitigated by implementing Web Workers/Service Workers to offload processing to additional cores. You should be able to implement the improvements while maintaining a single code base.
Web Workers can do a lot of processor-intensive work, like image and string manipulation, remote access (XMLHttpRequest), and math.
There is a project that integrates Ember with Web Workers; it might save you some time:
I don’t know the specific areas of your app that might benefit most from this strategy, but it seems like the ember-parallel library was written to address exactly the class of performance issues that concern you.
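To make the idea concrete, here is a minimal sketch of the Worker hand-off pattern; the function names and the idea of excerpting post text are purely illustrative assumptions, not actual Discourse or ember-parallel code.

```javascript
// Hypothetical example: keep the UI thread responsive by moving a pure,
// DOM-free function into a Web Worker running on a spare core.

// A pure, self-contained function like this is a good Worker candidate:
function excerpt(text, maxWords) {
  const words = text.split(/\s+/).filter(Boolean);
  return words.slice(0, maxWords).join(' ') +
         (words.length > maxWords ? '…' : '');
}

// worker.js — runs off the main thread:
// self.onmessage = (e) => self.postMessage(excerpt(e.data, 20));

// main thread — hand the work off instead of blocking scrolling:
// const worker = new Worker('worker.js');
// worker.onmessage = (e) => showPreview(e.data);
// worker.postMessage(rawPostText);
```

The key constraint is that Workers have no DOM access, so only pure data-in/data-out computations like the one above can be offloaded this way.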
I built a hybrid (Ionic / Angular) app a while back using Discourse as an API, when I was posting here more frequently. I mainly used my iPhone 5C and the iOS simulator for development, but the lackluster Android performance was quite apparent even then.
I have since moved toward React, as some developers have, and can attest to its incredible speed in both performance and development. Paired with Redux, I really see the future of apps going this way: a simple (mostly) state container with dumb child components that can only update the parent state, which notifies connect()ed child components of affected props by performing diffs.
Potentially leverage server side component rendering with the react-rails ruby gem.
Give it time (possibly a lot) before some dust settles in the FE space, but React + Redux is quite impressive for what it’s worth.
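For anyone unfamiliar with the pattern, a minimal sketch of the reducer idea is below; the action and field names are invented for illustration, not taken from any real app.

```javascript
// Sketch of the Redux idea: all state lives in one container, and pure
// reducer functions produce a brand-new state for each action.
function postsReducer(state, action) {
  if (state === undefined) state = [];     // initial state
  switch (action.type) {
    case 'POST_ADDED':
      return state.concat([action.post]);  // never mutate; return new state
    default:
      return state;                        // unknown actions change nothing
  }
}

// Connected components receive the new state and re-render only the
// components whose props actually changed, after a cheap reference diff.
```

Because the reducer never mutates the old state, diffing is just reference comparison, which is what makes the connect()ed-component updates cheap.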
A possible solution is to push to update the HTML spec to replace JavaScript MVC frameworks with something native, so that things like Ember and Angular become unnecessary.
This proposal went viral a few months back, but I stopped working on it as I didn’t really get any positive feedback or interest from the browser vendors. In fact, they were pushing for MORE JavaScript, which I thought was ridiculous.
Lots of developers were interested, though, and if it’s something we can get the browser vendors to support, we can push to implement it.
Something like this could solve the JavaScript speed problem within a year.
I’m more inclined to believe that it’s easier to optimize for ~10 CPU configurations than for 500.
It sucks that NITRO (or whatever it’s called today) isn’t available for all browsers.
Forcing me to use the slower Chrome (to get password and tab sync) is only making my perception of the device (iPad Air 1) worse.
Chrome on iOS sometimes crashes when using Discourse.
Has this been fixed for Chrome, or do they not use the native WebView? iOS 8 WebKit changes finally allow all apps to have the same performance as Safari - 9to5Mac
That being said, Discourse is somewhat inefficient. After browsing a Discourse site for a few hours on desktop, I sometimes end up with a tab using more than a gigabyte of RAM.
Excuse the image source - it was the only image Google would find for me. My RAM usage often ends up higher.
Browsing “aggressively” for ~10 minutes peaked at 1.2 GB of RAM, but after stopping for a minute or so it fell to
Which is still quite a lot…
When I run profiles for our topics page in production mode, a huge amount of the cost is the bookkeeping for bindings and observation.
I wonder if the FastBoot architecture could allow us to give up a lot of our addiction to bindings and observers and instead adopt a far more React-like approach to rendering.
Just render the whole kaboodle: if anything in the post changes, re-render the post completely, and so on. We adopted that approach partially on the topic list, and it has helped us a lot.
Perhaps a first move here would be to provide a primitive for a zero-bookkeeping render that we can experiment with. If all rendering does is convert objects to HTML, it would be super duper fast, even on Android.
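A minimal sketch of what such a primitive could look like follows; everything here (the function names, the post shape) is invented for illustration and is not actual Discourse or Ember code.

```javascript
// Illustrative only: a zero-bookkeeping render is a pure function from a
// post object to an HTML string. No bindings, no observers; on any change,
// throw the old markup away and call the function again.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

function renderPost(post) {
  return '<article id="post-' + post.id + '">' +
         '<h3>' + escapeHtml(post.author) + '</h3>' +
         '<div class="cooked">' + post.cooked + '</div>' +
         '</article>';
}

// On change, rebuild the whole thing:
// container.innerHTML = renderPost(updatedPost);
```

Since there are no observers to register or tear down, the per-render cost is just string concatenation, which is cheap even on slow single cores.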
I worry about the amount of contortions we will need to adopt FastBoot (always on) on Android, especially with infinite scrolling and so on.
Zero interest in Tapatalk – that’s a least common denominator solution. Half the functionality we offer isn’t even possible in Tapatalk, because it’s trying to render every forum software ever created in a single app.
Also having to download an app to get to a website… I think it’d be smarter to render plain 1996 HTML for slower Android devices, as mentioned several times upstream. If you happen to have something recent like a Galaxy S5, S6 or a Nexus 9, performance will be tolerable – remember we send down half the data on all Android devices.
It comes to the same thing, but as I understand it, Apple now allows other browsers to use Nitro on iOS (since iOS 8), but Chrome doesn’t do so because it’s missing a few features they rely on.
To this day I still think pure HTML & CSS provides the better UX. So I think the answer is rather obvious: render the whole view on the server before serving it to the user (the perceptually fast boot some mentioned above).
Which actually causes another problem: Ruby and Rails are slow. Jeff would probably know, given his experience running Stack Exchange at absolutely blazing speed. And as far as I am concerned, Ruby isn’t getting any dramatic performance increase in the near future.
Server-side rendering would just add another burden onto the server.
So I was wondering: maybe it’s time for some nice compiled language to try doing its best? There was a great talk recently called “Make the Back-End Team Jealous: Elm in Production” by Richard Feldman. One of the things the author mentions is the TodoMVC Benchmark, which seems to show that Elm does a great job. Plus, as a Haskeller, the usage of Elm in a project as popular as Discourse would make me happy, since it has a nice type system, and popularizing a functional language with a good type system is always awesome news.
If you are already working through an API and would like to add static rendering, it seems like you wouldn’t need to talk to the database or anything else. This might be a great opportunity to try writing a Ruby extension in Rust. There was some discussion about this recently on the internet, so maybe it would fit the template-rendering niche well.