The State of JavaScript on Android in 2015 is... poor

@codinghorror A solution might be to render Discourse on the server and send it to the client, enriching it with JavaScript logic afterwards. I know @tomdale has been working on FastBoot, but from the looks of it, it is definitely not ready for production apps yet.

I just ran your suggested test on Chrome on an i7 4700MQ / 16GB twice (shutting down some windows before the second run) with battery settings on “power saver”, and got a mean of about 350ms. Switching to “Balanced” and “High Performance” modes got me into the 70-80ms range.

This means you’re saying someone with a 2014 high-end laptop in power-saving mode exhibits 2012 iPhone performance… and that’s a problem for you. If your app is so complex that a Haswell i7 in power-saving mode isn’t going to cut it, then is it really the state of JavaScript performance on Android, or is it the performance of Ember?

Maybe the iPhone 6S cores are as fast per core as a Haswell i7 without power-saving switched on (if so, Bravo for them). Or maybe the 6S’s chip/browser has been tuned for this benchmark. But if I’m on the road and my laptop in power saving mode is going to bog down on Discourse, I’m not going to blame Chrome, I’m not going to blame Intel, I’m going to blame your app.

Ember Version: 1.11.3
User Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36
|              Ember Performance Suite - Results               |
|            Name            | Speed | Error | Samples | Mean  |
| Render Complex List (HTML) |  8.55 |  37.7 |     105 | 116.9 |

My Intel® Core™ i5-3570 CPU @ 3.40GHz (Desktop Work PC) just lost to an iPhone :sadpanda:


@codinghorror what about WebAssembly? It’s gaining momentum and can be a useful tool to achieve some performance improvements.


I’m not much of a developer, so this may be out of scope or just plain useless, but what about Mithril? That is what Flarum is using, and at least in my tests on Android I see much faster page loads compared to Discourse, or NodeBB for that matter. I don’t know anything about the backend of Flarum besides it being PHP, but I’ve seen articles discussing the speed benefits of the framework. Perhaps more importantly: how is Flarum different, how is it getting (at least in my small personal tests) faster initial page loads, and can any of that be incorporated by Discourse?

Additionally, I do like the idea of at least a non-JS/slow-phone fallback page, or even some kind of user-customizable (static?) front page, just to get something on the screen faster. In my Discourse installation (coming from old Drupal forums) I’ve seen such a large decrease in posts, I can’t help but think that some of this is page loading speed on Android…


We have this code in our post view:

const PostView = Discourse.GroupedView.extend(Ember.Evented, {
  classNameBindings: ['needsModeratorClass:moderator:regular',
                      'post.deleted' /* ... more bindings elided ... */],
  // ...
});
Ember has a feature that lets you bind class names on the DOM element it’s rendering to properties on the view.

In Android we render 10 posts, everywhere else 20.

This little snippet of code looks innocuous and follows best practices.

It is ultra convenient: if I set the .deleted property on a post, I know that magically, somehow, Ember will go ahead and fix up the class name so you have <div class='deleted'>.
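Conceptually, the binding behaves something like this stripped-down sketch (plain JS, no Ember; the names are illustrative, not Ember’s actual API):

```javascript
// Toy model of a class-name binding: an observer closes over the old
// class string and swaps it for the new one whenever the property changes.
function bindClassName(obj, prop, classNames) {
  let oldClass = null;
  function observer() {
    const newClass = obj[prop] ? prop : null; // e.g. 'deleted'
    if (oldClass) classNames.splice(classNames.indexOf(oldClass), 1);
    if (newClass) classNames.push(newClass);
    oldClass = newClass;
  }
  return observer; // Ember would subscribe this to property changes
}

const classNames = ['topic-post'];
const post = { deleted: false };
const onChange = bindClassName(post, 'deleted', classNames);

post.deleted = true;  // delete the post...
onChange();           // ...the observer fires and fixes up the class list
console.log(classNames.join(' ')); // "topic-post deleted"
```

The convenience is real: the class list stays in sync without the view author writing any DOM code. The cost is all the machinery behind that observer, which is what the profile below shows.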

The problem, though, is that this feature has a cost.

_applyClassNameBindings: function () {
  window.counter1 = window.counter1 || 0;
  var start = window.performance.now();
  var classBindings = this.classNameBindings;

  if (!classBindings || !classBindings.length) { return; }

  var classNames = this.classNames;
  var elem, newClass, dasherizedClass;

  // Loop through all of the configured bindings. These will be either
  // property names ('isUrgent') or property paths relative to the view
  // ('content.isUrgent')
  enumerable_utils.forEach(classBindings, function (binding) {
    var boundBinding;
    if (utils.isStream(binding)) {
      boundBinding = binding;
    } else {
      boundBinding = class_name_binding.streamifyClassNameBinding(this, binding, "_view.");
    }

    // Variable in which the old class value is saved. The observer function
    // closes over this variable, so it knows which string to remove when
    // the property changes.
    var oldClass;

    // Set up an observer on the context. If the property changes, toggle the
    // class name.
    var observer = this._wrapAsScheduled(function () {
      // Get the current value of the property
      elem = this.$();
      newClass = utils.read(boundBinding);

      // If we had previously added a class to the element, remove it.
      if (oldClass) {
        elem.removeClass(oldClass);
        // Also remove from classNames so that if the view gets rerendered,
        // the class doesn't get added back to the DOM.
        classNames.removeObject(oldClass);
      }

      // If necessary, add a new class. Make sure we keep track of it so
      // it can be removed in the future.
      if (newClass) {
        elem.addClass(newClass);
        oldClass = newClass;
      } else {
        oldClass = null;
      }
    });

    // Get the class name for the property at its current value
    dasherizedClass = utils.read(boundBinding);

    if (dasherizedClass) {
      // Ensure that it gets into the classNames array
      // so it is displayed when we render.
      enumerable_utils.addObject(classNames, dasherizedClass);

      // Save a reference to the class name so we can remove it
      // if the observer fires. Remember that this variable has
      // been closed over by the observer.
      oldClass = dasherizedClass;
    }

    utils.subscribe(boundBinding, observer, this);
    // Remove className so when the view is rerendered,
    // the className is added based on binding reevaluation
    this.one("willClearRender", function () {
      if (oldClass) {
        classNames.removeObject(oldClass);
        oldClass = null;
      }
    });
  }, this);

  window.counter1 += window.performance.now() - start;
}

On my i7 4770K Win 10 box, the cost of this convenience, amortized across displaying 20 posts, runs from 5ms to 15ms.

On my Nexus 7 the same feature costs 35ms to 50ms. Considering it is only rendering half the posts, the cost of this convenience on the Nexus 7 is 14x that of my desktop for this particular abstraction.

This is just the upfront cost; then add the hidden teardown cost, GC cost and so on, which on Android is severe. A lot of the slowness on Android is actually due to teardown of the previous page (on desktop we usually beat the XHR, but this is almost never the case on Android).

This specific example is just an illustration of the endemic issue. In React and some other frameworks you just render and don’t carry around the luggage of “binding”; instead you just rerender when needed. This approach means you don’t need bookkeeping, and the bookkeeping is the feature that is killing Android perf.
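The contrast, in miniature (illustrative plain JS, not actual React or Ember code): the class list is a pure function of the post’s current state, so there is nothing to observe, subscribe, or tear down.

```javascript
// Unbound render, React-style: no observers, no saved oldClass.
// When something changes, you just compute the output again.
function renderPost(post) {
  const classes = ['topic-post'];
  if (post.deleted) classes.push('deleted');
  if (post.wiki) classes.push('wiki');
  return `<div class='${classes.join(' ')}'>${post.body}</div>`;
}

const post = { body: 'hello', deleted: false, wiki: false };
console.log(renderPost(post)); // <div class='topic-post'>hello</div>

post.deleted = true;           // we know exactly when this happens...
console.log(renderPost(post)); // ...so just rerender: class='topic-post deleted'
```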

I am confident Chrome will be able to catch up to Safari and bridge the 40% performance gap we are currently observing. Having Android be 40% faster will be awesome, and we don’t even need to do anything but wait to get that.

However, all we can hope for is a 40-60% perf gain. Considering it takes seconds to render a topic on a Nexus 7 and sometimes even 10 seconds to render a complex page like the categories page, I am not convinced this is quite enough.

Thing is, we know when a post is deleted (we hit delete) … a post becomes a whisper (never) … turns into a wiki (wiki button clicked) and so on. We can afford some extra code for these cases, especially for huge perf gains. We really only care about 2 pages when it comes to optimising heavily (topic/topic list)

The only approach that will be fast enough for Android devices in the next 2-3 years is going to be unbound renders with triggered refreshes for our topic and topic list page.

Ember can probably get 20% faster, Chrome will probably get 40% faster, but a React style unbound render is going to be an order of magnitude faster, it’s a totally different ballpark.

I am confident we can keep one codebase and be maintainable and fast enough, we just need to work closely with @tomdale and @wycats to unlock some of this unbound goodness in Ember (which is already unlocked quite a bit for Fast Boot) and we can make Discourse fast even on Android.


Indeed, and React lends itself well to using webworkers: Flux inside Web Workers - Narendra Sisodiya - Medium

So that is a sort of solution, albeit one not everyone can implement…


Have you considered server-side rendering of components? That’s about the best performance gain you can get on mobile devices. Reducing client-side work is the key: keep AJAX requests to a minimum, force page refreshes, keep logic on the server, and fetch all results before the client attaches.

I will cite an old blog post by Twitter from 2012, and the year today is 2015: you are experiencing the same issues Twitter was facing three years ago.

This is the famous “time to first Tweet” benchmark which proves that JavaScript has become something it was not meant to be.


This is a really interesting thread, ultimately summarized by this.

we need to start considering alternatives for the Discourse project

Client-side heavy apps are at odds with the restrictions and capabilities of low-powered mobile devices. Filament Group’s “Performance Impact of Popular JavaScript MVC Frameworks” was written a year ago but is still as relevant today, perhaps more so given the proliferation of client-side MVC apps.

We need to look at mobile web apps in their own right, with resource restrictions and capabilities different from desktop, or we’ll be having this same discussion in ten years.


Shut it down! We’ve been sprung!

It’s not so much a matter of “mastery” as a matter of tradeoffs. How much slower would development be if we were hand-rolling all our JS to get to “DOS emulator” grade performance, and thus how many fewer features would Discourse have?


The larger point is that the iPhone 6s is about as fast at JavaScript as the average 2-year-old laptop or desktop. Do you think mobile devices will have less memory, less CPU power over time?

I am worried about $99 Android devices in developing countries that have to pay the 500% JavaScript performance tax most of all.


Do you think mobile devices will have less memory, less CPU power over time?

Of course not, but it’s clear that these frameworks aren’t designed for the current crop of mobile devices. Given that’s the case, if you want to ship something that works well on them, you need to target them specifically and not keep hoping that things improve.

The frameworks themselves aren’t likely to get smaller or less complex over time, they will probably use more resources in the future as the devices become more capable making a lot of that progress moot.

I tend to agree with the others in the thread suggesting that a ‘lite’ version of Discourse, with server-side rendering and JavaScript designed specifically for mobile, could be a good addition. I realise that’s easier said than done. Like you wrote in the original post, I too am doubtful that the Ember apps of 2017 will perform well on the intentionally low-end devices that Google is trying to get into the hands of the next billion smartphone users.


Mobile devices will get slower over time.

They already have, you’re putting your latest smartphone down and worrying about $99 Android phones, aren’t you? How about $50 tablets this Christmas? $25 phones next spring?

The population outside the First World is increasingly at your doorstep. So it’s not so much a tradeoff between features and speed; it’s a tradeoff between features and the future. The future is in cheap commodity devices.


I love Discourse for forums, but my website is the worst possible use case for it: a catalog of free Android games without IAP, with most users on cheap Android phones. The downvote is another sticking point for this. I am testing something lighter, and keeping an eye on Discourse anyway.

I will share my analytics results and/or details of this test with the developers if they PM me. It’s the worst case possible with Android.

If there aren’t JS-heavy websites, phones won’t improve.

My opinion is that since Discourse is paying the performance price, they should continue and make the best possible use of JavaScript. Make the JavaScript worth its weight.

An alternative simple render is the obvious workaround; they could encourage the open-source world to contribute to the codebase to help with the budget/resource limitations. Perhaps putting it on the roadmap and dedicating a small effort just to show the way, or whatever works better.

No. Computers always get faster (and smaller) over time. Feel free to look up graphs of the last 50 years if you don’t believe me.

Hell, look up a graph of mobile performance over the last five years if you don’t believe me.

You can certainly wonder how fast that speed will trickle down to $99, $50, $10, $5, $1 devices… but that it will happen has never once been in question by any educated person familiar with the history of computing.

The real issue is the arbitrary, utterly platform-specific 500% performance penalty large JS frameworks pay on Android; feel free to review the entirety of this topic for specifics. Just scroll up and read.


Your phone is getting faster, mine is, everybody’s is, in fact.

But low-end devices enter the game much, much faster than the technology is advancing.

So on average your readers’ phones do indeed get slower.

It’s like how, during the baby boom years, the mean age went down. Everybody was still aging, but the mean age went down, you see?
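The arithmetic behind that analogy, with made-up numbers: even if every device tier gets faster year over year, a flood of low-end units can drag the average speed of the installed base down.

```javascript
// Weighted average speed of the installed base (numbers are invented
// for illustration; 'speed' is in arbitrary benchmark units).
function meanSpeed(tiers) {
  const units = tiers.reduce((sum, t) => sum + t.units, 0);
  const total = tiers.reduce((sum, t) => sum + t.units * t.speed, 0);
  return total / units;
}

// Year 1: mostly mid- and high-end devices.
const year1 = [
  { speed: 100, units: 10 }, // high end
  { speed: 50, units: 20 },  // mid range
];

// Year 2: every tier is 20% faster, but cheap devices flood the market.
const year2 = [
  { speed: 120, units: 12 },
  { speed: 60, units: 24 },
  { speed: 20, units: 60 },  // new $99 low-end tier
];

console.log(meanSpeed(year1)); // ~66.7
console.log(meanSpeed(year2)); // 42.5 -- lower, even though every tier got faster
```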


Hi Sam,

@wycats is doing really interesting work on first render performance (re-rendering is already blazing fast thanks to Glimmer). We also have issues with mobile (where CPUs are slower), and think this will help a great deal.

You’ve probably seen it already, but if not there is a video here:


I think the point here is that whatever is in the top-of-the-line Android devices today will be in thousands or millions of low-end Android devices in a year or two. Phones may have gotten faster in general, but the bulk of users will still be on slow devices. And the OS itself is probably not going to be optimized for this “obsolete”-ish hardware by then.


Do you think your hardware intuition might be prejudiced by the Engadget-enlightened, SSD-boosted experiences you enjoy day to day?

It took the Google/Motorola deal to jerk the low-end market into its current state from the Gingerbread age, and you know what? It’s still 2013 now, and it will remain 2013 through the next year unless another black swan event happens.