The State of JavaScript on Android in 2015 is... poor

For several years now we’ve tracked the fact that, from 2012 onward, Android JavaScript performance has become wildly divergent from iOS JavaScript performance. And not in a good way.

To give you an idea of how divergent it has become, try:

This is the benchmark most representative of Discourse performance, and the absolute best known Android score for this benchmark is right at ~400ms on a Samsung Galaxy S6. That doesn’t seem too bad until you compare…

  • iPhone 5 → 340ms
  • iPhone 5s → 175ms
  • iPhone 6 → 140ms
  • iPad Air 2 → 120ms
  • iPhone 6s → 60-70ms

In a nutshell, the fastest known Android device available today – and there are millions of Android devices much slower than that out there – performs 5× slower than a new iPhone 6s, and a little worse than a 2012 era iPhone 5 in Ember. How depressing.
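For reference, a score like “~400ms” is just wall-clock time around a render loop; a generic harness of this kind (an illustrative sketch, not the actual benchmark’s code) looks roughly like this:

```javascript
// Generic wall-clock harness for render benchmarks (illustrative only;
// this is NOT the actual Ember benchmark used for the scores above).
function benchmark(renderFn, iterations = 10) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    renderFn(); // the work being measured, e.g. rendering a complex list
  }
  return (Date.now() - start) / iterations; // mean milliseconds per run
}
```

The spread between devices comes entirely from how fast the JS engine and CPU chew through that inner loop.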

We’ve done enough research to know this issue is not really specific to Ember, but also affects Angular and most other heavy/complex JavaScript on Android. Why?

Part of it is indeed Chrome/V8 JavaScript optimization issues on Android as you can see from this AnandTech Galaxy S6 review. Note the browser used:

It’s also partly because single core performance on Android is falling way, way behind iOS. Notice that the flagship Android device barely has the single core grunt of an old iPad Mini based on the old A7 core. Compare single core Android GeekBench versus single core iOS GeekBench:

It seems the Android manufacturers are more interested in slapping n slow CPU cores on a die than they are in producing very fast CPU cores. And this is quite punishing when it comes to JavaScript.

This is becoming more and more of a systemic problem in the Android ecosystem, one that will not go away in the next few years, and it may affect the future of Discourse, since we bet heavily on near-desktop JavaScript performance on mobile devices. That is clearly happening on iOS but it is quite disastrously the opposite on Android.

I am no longer optimistic this will change in the next two years, and there are untold millions of slow Android devices out there, so we need to start considering alternatives for the Discourse project.

:warning: Update

We did make significant changes to the Discourse project to work around this issue. See Robin’s blog post for details.

39 Likes

Have you tried with Firefox for Android instead of Chrome? :wink:

3 Likes

Try it yourself – always slower in my testing.

6 Likes

So what are you considering or what ideas are you throwing around internally? Functional features with Javascript disabled? A dedicated app hitting the Discourse API?

1 Like

Hard to say; some of it would possibly be under “let’s present a better minimal rendered experience for ancient devices”. A dedicated native Android app is far beyond our current staff, budget, and resource levels. Plus: two code bases, two sets of bugs, two sets of support, etc.

One mitigation I should have mentioned – we already send down half the content to Android devices and have for a year now, so we effectively cut that 5× Android JavaScript performance penalty in half. Which means it is still 2.5× slower :frowning: but more tolerable.
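Conceptually that mitigation is just a user-agent check on the server; a hypothetical sketch (invented names and numbers, not Discourse’s actual code):

```javascript
// Hypothetical sketch of halving the payload for Android devices.
// DEFAULT_CHUNK_SIZE and postsPerChunk are invented names.
const DEFAULT_CHUNK_SIZE = 20; // posts sent per page-load

function postsPerChunk(userAgent) {
  const isAndroid = /Android/i.test(userAgent || "");
  // Send half as much content to Android to offset slower JS rendering.
  return isAndroid ? DEFAULT_CHUNK_SIZE / 2 : DEFAULT_CHUNK_SIZE;
}
```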

9 Likes

This is getting really frustrating for me personally, not just from a developer standpoint but from a consumer standpoint. I love my phone (Moto X 2014) and much prefer Android to iOS, but as JS apps become more ubiquitous, we’re almost at the point where a (personally) inferior experience on the rest of the device is a worthwhile trade-off to have the sites I use and love work well.

We have a Discourse forum for our sports site (thesportscollective.co) and many of our writers have been complaining about poor performance of the forum on their older Android phones. I don’t WANT to move away from Discourse, but if Android performance doesn’t start to improve we may have to, since 68% of our active users are on Android devices. At this moment in time, I would support Discourse moving away from Ember, even though I think the functionality Ember provides is pretty core to the Discourse experience. It’s just not worth the performance trade-off in my mind at this time for such a large segment of our users’ devices.

3 Likes

But wouldn’t the “app” just be a browser that worked? Samsung seems to be able to do it, so is it Chrome not being efficient, or is the hardware really that hobbled? This has been an issue for such a long time! How is Google getting away with it, even for Angular!

1 Like

Reminder, Angular also has this same performance problem on Android. It’s not specific to Ember. Any complex JavaScript app will exhibit the same problems.

So it’s really about throwing away the JavaScript part of the app, which is challenging since Discourse is a JavaScript app. How do you throw away what you are? You can’t rip out your own skeleton.

Anyway, I agree, the 2013 me said “surely the state of JavaScript performance will improve on Android by 2016”. Sorry 2013 me, things didn’t turn out that way :cry:

8 Likes

I guess the solution will be for Google (and/or other manufacturers) to license/imitate/copy/reproduce/invent something akin to Apple AX chips.

I think you should change the title to The sad State…

1 Like

Yeah, I understand that. I meant “ditch Ember” as in ditch the JS app in general. Like you said, it’s a complex issue though. Ditching Ember would be a MAJOR change for Discourse and one that, in some ways, takes away a lot of what makes Discourse unique among other solutions. But with the ever-growing Android market share it might be necessary. Maybe “bytecode for the web” will save us, but who knows when, or even if, that will be supported widely enough to make a difference.

The other strategy is to double down on the coattails of Apple, since the company is clearly firing on all cylinders. But as @sam has pointed out, Apple does not play the low cost game, and probably never will.

This would mean Discourse is positioning itself as more of a “premium” solution (read: devices that run JavaScript at near desktop speeds) than an “everyperson” solution (read: zillions of cheap Android devices with 2012 era specs), given the worldwide share of iOS and Apple.

Not super happy with that, but it might be the only viable way forward.

1 Like

Does Discourse do some server side rendering? Does (would) this help?

1 Like

As an Android user that would certainly bum me out, but if that’s the best decision for Discourse I suppose I can’t encourage otherwise. I’m not familiar with any websites using that strategy, though. It’s one thing when companies make iOS-only apps (there’s clear segmentation there), but having a website that just runs like garbage on one platform? I have a feeling users will tend to, as they often do, blame the service first.

It just means over time you’ll lose Android users as they get fed up with the huge speed disparity (if they care, or notice) but you’ll retain and grow iOS users.

If Apple’s overall market share keeps increasing, this wouldn’t necessarily be a bad strategy. Not my favorite, and not really in harmony with the original vision for Discourse, but I’m limited in what we can do with the resources that we have. We can’t build two distinct applications (web, for iOS, and native, for Android) without destroying the company in the process.

It could also be that over a long time scale (e.g. five years out) Android will fix this. But it clearly will not be fixed in a year or two.

1 Like

I would argue that actually they do, with their older devices. If you’re comparing on processor speed & ability — the case could be made that Apple’s phones are actually cheaper (3 year old iPhone 5 vs brand new Samsung Galaxy S6).

1 Like

I thought Android was still seeing gains in market share, though? Or is that only on a global scale, while the US is declining?

It might help to have an HTML mode that was designed to work well on older phones.

3 Likes

According to figures released [Feb 2015] by Kantar Worldpanel ComTech, iOS accounted for 47.7 percent of all smartphones sold in December 2014 in the U.S. That’s just a notch above Android’s 47.6 percent.

The larger phones helped them a lot. Of course I’ve been saying that for years… I’d expect Apple to dominate over time in the US, particularly given how amazing the latest devices have been. They might do well in Europe too, but I’m not sure.

For the rest of the world, Android will probably win due to low cost.

1 Like

I don’t really know anything about this, but what about creating an Android app? (Of course I’m guessing it would run faster than Discourse in a browser.)

I love Discourse and the direction it’s been going in. It would be a shame to change.

Maintaining separate code bases for web, android, and iOS would be a nightmare and generally speaking web traffic is growing faster than app traffic.

With low cost, comes even lower performance. :wink:

That was covered upthread:

1 Like

Not always. Ironically, the fact that high-end Android phones tend to focus on more cores means their single-core performance usually isn’t much higher than the (good) low-cost phones. You just have 2-4 shitty cores instead of 8.

1 Like

Discourse is not a JavaScript app. Sure, that’s what it is built with, but that’s not what it is, any more than Wikipedia is a “PHP app” or Facebook was one (and it isn’t really anymore).

No site owner chooses Discourse because it is built with JavaScript and Ember and whatever else you use.

They choose Discourse because it’s a forum that tries to solve the real problems that regular forums have. And for the most part, it’s an excellent solution that has definitely changed how people think about forums. We don’t sell saddles here, after all.

The technology stack doesn’t matter. The user experience (on all devices) does. So make a hybrid solution. Render things server side instead of on the client, so you can instantly send down the entire cached page sans the user icon in the menu, and then load that in afterwards (along with anything else). Once the initial page load is over it seems like Android performance isn’t nearly as bad. I’m not an expert on how things work under the hood, so it’s just my uninformed two cents.


Personally I’ve always thought a true official Discourse mobile app, where users can subscribe to any Discourse forum and get notifications, offline sync, and significantly better compose options, would be a much better solution for actually interacting on a mobile device.

You could also give forum owners the option to use ads in stream, which would give people a real option for monetizing their forum without being too intrusive. And you could provide a Discourse forum directory to help with discovery, or even source really interesting discussion topics across the web, which would be fascinating.

Yeah, it’ll cost money to create the apps, but it would probably be worth it in the long run. And everything works against the Discourse API so you’re just building a front-end.

But even if you have the mobile app it won’t change the fact that initial page load performance on Android isn’t a good experience.

11 Likes

All well and good, but

3 Likes

Ultimately one platform would always lag behind (most likely the Android app) and it wouldn’t be a better experience, just a different kind of issue. If individual sites want to use the Discourse API to make their own apps, that’s one thing, but I’m not sure having a global Discourse app is really that great an idea even if the manpower and resources existed.

2 Likes

I remember Twitter going through the server side rendering process and reading about Ember having started “Fastboot”…

Did Ember’s Fastboot implementation make it into the current Discourse version?

Did it help? Would it help at all?

1 Like

FastBoot is nowhere near production-ready at this stage; if you look at the repo on GitHub they haven’t even gotten it serving JS assets yet. In addition, FastBoot would only help with initial page load, as it’s only designed to render the initial page and let the front-end app take over from there. FastBoot is definitely something I’d like to see in Discourse eventually, but it’s nowhere near ready at this stage, and it wouldn’t solve the overall issue of Android performance being subpar.

1 Like

At least Chrome supports the Push API. Push notifications are an essential feature of a mobile app. With iOS you may get near-native app performance, but not a near-native app experience.

One of the things I think is worth mentioning is that most users will never have experienced the other side of the OS divide. By this, I mean that if they’ve always used Android (like myself), they will be used to the (relatively) slow rendering of JS-heavy sites like Discourse. If they’ve always used iOS, then they’re used to the (relatively) fast rendering.

In other words, don’t compare Apples to Oranges. Most users won’t. Most users will compare their new shiny orange to their old, mouldy orange.

That’s not to take away from the fact that performance is significantly worse on Android. It is an issue, and something to keep considering from a development point of view. I’d assume a technique like server-side rendering of pages (or similar) is available to work around it. But the end users of the product probably (I’d say won’t, but I don’t have any facts to back it up) don’t care. I already find the performance of Discourse incredibly better on mobile than any other forum product I’ve used :slight_smile: Keep the good UX and people will keep using it is my guess.

8 Likes

That would be true if all websites were JS apps, but they’re going to be comparing Discourse against other sites that are not JS apps, and any Discourse site is going to feel like a bad experience in comparison. Just like the average user won’t know how much better JS sites work on Apple, they won’t know “Oh, this is a JS site, and therefore I should excuse its poor experience because all JS sites are like this.” The average user just knows that this site doesn’t feel as snappy as those other sites, and it’s a poor experience overall.

6 Likes

So, granted, FastBoot is not yet production-ready. But it is headed in that direction, with several companies working on getting it to that state and planning on using it sooner rather than later.

I wonder:

  1. Would FastBoot help with this problem?
  2. Might it actually be better to always force Android devices into a perpetual-FastBoot mode? E.g., never loading the JavaScript and always hitting the server for a different page? It seems a shame, given how much cool stuff–like Service Worker—is available on Chrome for Android. But it’s hard to argue with the awful performance.
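A perpetual-FastBoot mode would presumably boil down to a user-agent switch on the server; a hypothetical sketch (invented names, nothing to do with FastBoot’s real API):

```javascript
// Hypothetical sketch of forcing Android into a server-rendered mode via
// user-agent detection. All names here are invented, not FastBoot's API.
function shouldServeStaticHtml(userAgent) {
  return /Android/i.test(userAgent || "");
}

function responsePlanFor(userAgent) {
  return shouldServeStaticHtml(userAgent)
    ? { mode: "server-rendered", scripts: [] }            // plain HTML, no app JS
    : { mode: "client-app", scripts: ["ember-app.js"] };  // normal Ember boot
}
```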
13 Likes
  1. Not out of the box, even when it’s done. This isn’t a problem FastBoot is fundamentally meant to solve.
  2. We don’t know what FastBoot will end up looking like beyond a general product outline, so my instinct would be to say no, it would be more trouble than it’s worth. You’re talking about using FastBoot to essentially force an Ember app to be a static site based on the device that’s visiting it. That sounds messy on its face, and what if Android DOES get its act together some day? Now you have to force older devices to the static site, but newer ones to the Ember app.

Discourse isn’t the fastest thing ever on my devices (Moto X 2014, Galaxy Tab S 2014 with a custom ROM) but it is fast enough.

What do I mean? It loads faster than Facebook.com, so everyone is used to waiting a bit on first load. After that, everything is smooth if you are using a reasonably recent device.

Maybe a React Native/Java app could squeeze out more performance? Sure, but is this a show-stopper or just a nice-to-have?

4 Likes

Has anyone benchmarked Discourse on the Android M preview?

We’ve done enough research to know this issue is not really specific to Ember, but also affects Angular and most other heavy/complex JavaScript on Android.

Could you reveal some more details of this research? Which JS frameworks were tested? Which one is the fastest? Which one is the slowest? Are there any numbers available?
Thanks!

1 Like

This is becoming more and more of a systemic problem in the Android ecosystem, one that will not go away in the next few years, and it may affect the future of Discourse, since we bet heavily on near-desktop JavaScript performance on mobile devices. That is clearly happening on iOS but it is quite disastrously the opposite on Android.

A large portion of the world uses low-end Android devices, which is why apps like Facebook Lite exist (which has gained hundreds of thousands of users in just a few months). If you want to have wide reach to as many users as possible, you’re going to eventually need some sort of lighter version that uses server-side rendering rather than client-side.

5 Likes

John Gruber suggests:

I don’t think it’s because Android manufacturers are cheaping out, as Atwood implies. I think it’s because Bob Mansfield’s team inside Apple is that far ahead of the rest of the industry.

1 Like

I’m a JavaScript Developer myself, and I understand the problem with attempting to get JavaScript heavy sites working smoothly on Android devices.

However, I don’t think it’s as large a problem as you make it sound. I think Android devices, even though Qualcomm does seem to be pushing more cores, have a bright future with JS. There is also a lot of work being done on JavaScript engines and CPU architectures to utilize multiple cores in single-threaded processes like the event loop. The SpiderMonkey code monkeys are already making some progress in this area.

I don’t know the workings of Discourse, nor do I have any experience with ember. I think you should try building a very thin client that works with mostly static pages, maybe with some JS for menus, and using paginated posts. Use CSS transitions whenever possible. There’s really not much else I can recommend without getting a good look at what you’re doing already. (I have plenty on my plate anyhow)

3 Likes

Hi Jeff,
long time fan
Have you guys looked at React? You can push the same code client side as well as render it on the back end (isomorphic JavaScript).

I would start by figuring out what the heaviest piece of perf code is, see whether it can be recreated with React, and re-measure performance.
The new Android React Native would have been an interesting option if you guys had the option to rewrite.

1 Like

Just ran it on my Note 5 and my 6+, both in Chrome. The Note 5 got 302ms and the 6+ got 309ms. Not sure what you did differently. I will test it at the office with the drawer full of devices… but I suspect the numbers you posted are rather off on iOS.

Edit: ran on my iPad Air 2 in Chrome… 266.80ms

1 Like

I don’t see where Jeff suggested Android manufacturers are cheaping out; they’re merely using their money to build CPUs with more cores rather than investing in new designs with better single-core performance. Both approaches cost a lot of money, but one of them is fundamentally better for JavaScript performance, since most if not all JS engine implementations are almost entirely single-threaded.

It seems the Android manufacturers are more interested in slapping n slow CPU cores on a die than they are in producing very fast CPU cores. And this is quite punishing when it comes to JavaScript.

While this is one side of the problem, the other angle is battery life. The vast majority of mobile users aren’t that concerned about JavaScript speed and would far rather have improved battery life than faster JS processing in their browser.

1 Like

I think John’s read of Jeff’s post comes from Jeff’s choice to say “slapping” rather than something like “putting”.

Slapping parts down on a chip suggests some level of carelessness… which suggests cheapness because they’re not (in this interpretation) spending money on diligent engineering effort.

(I don’t know if any of this is true or not, but I think it explains where John’s read probably came from.)

It could still be that one approach costs notably more than the other when a phone manufacturer chooses what features its CPU gets. I’m thinking of the situation where Apple, Samsung, and Huawei are ARM architectural licensees, LG is an ARM core licensee (a cheaper license), and I guess companies like Sony and HTC don’t have any ARM licenses and buy chips from Qualcomm or whomever. If ARM CPU and/or SoC designs trend toward favoring multi-core count and performance, it seems plausible that a company would have to go a more custom and expensive route to get a chip that focuses more on single-core performance.

I get what you’re saying about how it’s just a choice of what they’re spending their money/effort on. Agreed. I wrote the paragraph above to speculate that maybe part of the money-spending decision did involve a cheapness debate of some sort.

1 Like

@codinghorror Have you considered adding Tapatalk support to Discourse? I’m thinking Android users with slow phones could just use Tapatalk instead of a web browser.

1 Like

The problem is you tested in Chrome, which is hobbled on iOS because it doesn’t get full access to Safari’s JS engine. Re-run the test with the core browser on each device (Chrome on Android, Safari on iOS) and you’ll see a much bigger difference.

6 Likes

Funny but rather relevant story: I used to use Chrome on my iPhone 6. Due to Apple craziness it simply does not run at the same speed as Safari; in fact we have clocked it at half the speed. This happens because you are forced to use WebView.

Even at half speed it was still fairly fast and usable

But… I needed to do something about the composer for position fixed bugs in Safari, so forced myself to use it for a bit.

Once I used Safari for a few hours on the iPhone, I simply could not tolerate the artificial slowness of chrome. I simply stopped using it on my iPhone.

Stuff is mainly acceptable to users because they know no better; once any Android user experiences Discourse on a 6s, going back to Android will be incredibly painful.

On iPhone 6 Safari, Discourse perf is excellent. It can be tolerable or fast enough for some Android users, but it is never excellent and often terrible.

… Composed from my iPhone 6

8 Likes

Consider trying to identify areas of your app where performance bottlenecks could be mitigated by implementing Web Workers/Service Workers to offload processing to additional cores. You should be able to implement the improvements while maintaining a single code base.

Web Workers can do a lot of processor-intensive stuff like image and string manipulation, remote access (XMLHTTP), and math.

There is a project that integrates Ember with Web Workers; it might save you some time:

https://github.com/bengillies/ember-parallel

I don’t know the specific areas of your app that might benefit most from this strategy, but it seems like the ember-parallel library was written to address exactly the class of performance issues that concern you.
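For illustration, the offloading pattern looks roughly like this; the worker file name and the `cookPost` function are invented stand-ins, not Discourse or ember-parallel code:

```javascript
// Sketch of offloading CPU-heavy work (here: "cooking" raw post text into
// HTML) to a Web Worker so the main thread stays responsive.

// cook-worker.js — would run off the main thread in a browser:
// self.onmessage = (e) => self.postMessage(cookPost(e.data));

// A stand-in for some processor-intensive string manipulation:
function cookPost(raw) {
  return raw
    .split("\n\n")
    .map((paragraph) => "<p>" + paragraph.trim() + "</p>")
    .join("");
}

// Main thread: hand the work to the worker instead of blocking the UI.
// const worker = new Worker("cook-worker.js");
// worker.onmessage = (e) => insertIntoDom(e.data); // hypothetical DOM helper
// worker.postMessage(rawPostText);
```

In a real app the cooked HTML comes back asynchronously via `onmessage`, so the UI thread never blocks on the heavy work.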

4 Likes

I built a hybrid (Ionic/Angular) app a while back using Discourse as an API when I was posting here more frequently. I mainly used my iPhone 5C and the iOS simulator for development, but the lagging Android performance was quite apparent even then.

I have since moved toward React, as some developers have, and can attest to its incredible speed in both performance and development. Paired with Redux, I really see the future of apps going this way: a simple (mostly) state container with dumb child components that can only update the parent state, which notifies connect()ed child components with affected props by performing diffs.

Potentially leverage server side component rendering with the react-rails ruby gem.

Give it time (possibly a lot) before some dust settles in the FE space, but React + Redux is quite impressive for what it’s worth.
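The state-container pattern described above can be sketched as a toy reducer (illustrative only; the action types are invented, and this is neither Discourse nor actual Redux code):

```javascript
// Toy Redux-style reducer: one pure function from (state, action) to a new
// state. Never mutate; return fresh objects so change detection is a cheap
// reference comparison.
function reducer(state = { unread: 0 }, action) {
  switch (action.type) {
    case "POST_ARRIVED":
      return { ...state, unread: state.unread + 1 };
    case "TOPIC_READ":
      return { ...state, unread: 0 };
    default:
      return state;
  }
}

// Components dispatch actions upward; connected children re-render only
// when the slice of state they care about actually changed.
let state = reducer(undefined, { type: "@@init" });
state = reducer(state, { type: "POST_ARRIVED" });
```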

A possible solution is to push to update the HTML spec to replace Javascript MVC frameworks with something native, so that things like Ember and Angular become unnecessary.

Check out the proposal at: https://www.github.com/mozumder/html6

This proposal went viral a few months back, but I stopped working on it as I didn’t really get any positive feedback or interest from the browser vendors. In fact, they were pushing for MORE JavaScript, which I thought was ridiculous.

Lots of developers were interested, though, and if it’s something we can get the browser vendors to support, we can push to implement this.

Something like this can solve the Javascript speed problem within 1 year.

I’m more inclined to believe that it’s easier to optimize for ~10 CPU configurations than for 500.

It sucks that Nitro (or whatever it’s called today) isn’t available to all browsers.
Forcing me to use the slower Chrome (to get password and tab sync) is only making my perception of the device (iPad Air 1) worse.
Chrome on iOS sometimes crashes when using Discourse.
Has this been fixed for Chrome, or do they not use the native WebView? iOS 8 WebKit changes finally allow all apps to have the same performance as Safari - 9to5Mac

That being said, Discourse is somewhat inefficient. After browsing a Discourse site for a few hours on desktop I sometimes end up with a tab using more than a gigabyte of RAM, and often higher.

Browsing “aggressively” for ~10 minutes peaked at 1.2 GB of RAM, but after stopping for a minute or so it fell back down, which is still quite a lot…

1 Like

When I run profiles for our topics page in production mode, a huge amount of the cost is the bookkeeping of bindings and observation.

I wonder if the FastBoot architecture could allow us to adopt an architecture that gives up on a lot of our addiction to bindings and observers and instead adopts a far more React-like approach to rendering.

Just render the whole kaboodle: if anything in the post changes, re-render the post completely, and so on. We adopted that approach partially on the topic list and it has helped us a lot.

Perhaps a first move here would be to provide a primitive for a zero-bookkeeping render that we can experiment with; if all rendering is doing is converting objects to HTML, it would be super duper fast, even on Android.
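A zero-bookkeeping render of that sort is essentially just a pure function from a post object to an HTML string; an illustrative sketch (invented field names, not Discourse code):

```javascript
// No bindings, no observers: a pure function from a post object to HTML.
// When anything in the post changes, throw the old markup away and render
// the whole post again from scratch.
function renderPost(post) {
  return (
    '<article id="post-' + post.id + '">' +
      "<h4>" + post.username + "</h4>" +
      "<div>" + post.cooked + "</div>" +
      "<span>" + post.likeCount + " likes</span>" +
    "</article>"
  );
}

// On change, replace the container's contents wholesale; no per-property
// observation bookkeeping to maintain.
function onPostChanged(post, container) {
  container.innerHTML = renderPost(post);
}
```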

I worry about the amount of contortions we will need to adopt FastBoot (always on) on Android, especially with infinite scroll, etc.

6 Likes

This would be an excellent solution IMO. The client app becomes someone else’s problem, and it’s available on all platforms.

A plugin was even started 2 years ago and basic functionality was working

https://meta.discourse.org/t/tapatalk-api-implementation-project/10023

Seems to me that Core’s hostility levels might be a little lower now?

Zero interest in Tapatalk – that’s a least common denominator solution. Half the functionality we offer isn’t even possible in Tapatalk, because it’s trying to render every forum software ever created in a single app.

Also, having to download an app to get to a website… I think it’d be smarter to render plain 1996 HTML for slower Android devices, as mentioned several times upthread. If you happen to have something recent like a Galaxy S5, S6, or a Nexus 9, performance will be tolerable – remember we send down half the data on all Android devices.

6 Likes

Fair enough. I suppose “shortcut to half-way” gets beaten by a longer route to the destination.

It comes to the same thing, but as I understand it, Apple now allow other browsers to use Nitro on iOS (since iOS 8) but Chrome doesn’t do so because it’s missing a few features they rely on.

7 Likes

To this day I still think pure HTML & CSS provides the better UX. So I think the answer is rather obvious: render the whole view on the server before serving it to the user. Perpetual FastBoot, as some mentioned above.

Which actually causes another problem: Ruby and Rails are slow. Jeff would probably know, given his experience running Stack Exchange at absolutely blazing speed. And as far as I am concerned, Ruby isn’t getting any dramatic performance increase in the near future.

Making rendering server-side just adds another burden onto the server.

1 Like

Yes, Ruby isn’t blazingly fast, but it’s a lot easier to throw hardware at the problem on the server than it is on the client…

So I was wondering, maybe it’s time for some nice compiled language to try doing its best? There was a great talk recently called “Make the Back-End Team Jealous: Elm in Production” by Richard Feldman. One of the things the author mentions is the TodoMVC Benchmark, which seems to show that Elm does a great job. Plus, as a Haskeller, usage of Elm in a project as popular as Discourse would make me happy, since it has a nice type system, and popularizing a functional language with a good type system is always awesome news.

Hope this is not too much off-topic. Cheers!

If you are currently working through the API and would like to add static rendering, it seems like you wouldn’t need to talk to the database or anything else. This might be a great opportunity to try writing a Ruby extension in Rust; there was a topic about this on the internet recently, so maybe it would fit the template-rendering niche well.

And with Perfect Browser, and with Mobile Safari, on my iPad Air. Android devices being slow is just half the story; the other half is that mobile devices have limited RAM and don’t page to disk.

Discourse is one of those web apps I have learned to avoid on my ipad because if I have too many tabs open, trying to load a forum that uses Discourse will crash my browser. That doesn’t happen on any other forums or commenting systems that I encounter on the web, just with Discourse.

1 Like

Which iPad? If it is a 2010 iPad 1 with 256 MB of RAM, I’d expect crashes on lots of modern websites. If it is a 2012/2013 iPad 3 with 1 GB of RAM (or newer) you should be fine.

Modern iOS devices now finally have 2 GB of RAM (iPad Air 2, iPhone 6s), but I’ve never had any problems at all using Discourse for hours at a time on an iOS device with 1 GB of RAM.

Most modern-ish Android devices have 2 GB or 3 GB of RAM. It does not help them much in the performance department, but it can help with paging out to disk.

For what it’s worth, I’m getting about 290ms on my OnePlus Two, so things aren’t quite as dire as the atrocious S6 might imply.

That’s encouraging. Slightly better than iPhone 5 – in which version of Chrome? Android did improve a bit in Chrome 44, 45, 46 in this area.

Tested on a fresh Chrome 45 install. Big cores at work, I suppose. :slight_smile:

I use Discourse almost exclusively on my Nexus 5. I find it quite usable. The performance seems fine to me, but maybe I don’t know any better. My main frustrations are with things not working quite right on mobile, e.g. quote reply.

I think it’s great that you’re taking performance so seriously though.

Try the latest version, or meta; I worked around a horrible bug 2 days ago.

Though even after this change quoting is awkward. I think a dedicated quoting mode would be a huge help on mobile.

5 Likes

I’m very much in favour of this. A “Discourse Lite” isn’t just a workaround; it also broadens your userbase. The no-js and legacy phone users are a shrinking minority, but we’re still talking about millions of users here. It’s with good reason that Facebook offers a no-js interface for users like these.

(Anyone who wants to test this can go to Facebook with JavaScript disabled. They’ll give you the option to go to https://m.facebook.com. It will just redirect to the desktop version if JS is on.)

This strategy can also be worked on in unison with “sticking to your guns” and powering onwards with Ember. I still have some faint hope in JavaScript on mobile. Looking at how prevalent Android is becoming as a platform, expanding from mobile phones to watches, laptops and TVs, and knowing how important an open web is to Google’s business, I can’t help but think things will get better. The same goes for Mozilla with their Firefox OS and “official support” for Ember.js as a top notch framework. I guess the bigger question is “when”. I find some solace in the fact that WebView is updated on my Android tablet almost daily.

Regarding native apps (rant-ish)

Most of this has been covered above already, but I felt the need to collect my own thoughts about it, and I might as well make my musings public, however limited my point of view is.

The caveats of native apps are considerable:

Native apps don’t promote rainbows

Discourse forums are only just starting to differentiate themselves with unique fonts and colors, the baby steps of what will hopefully be a flourishing theme marketplace in the more distant future. Pushing native apps would hinder this trend, because why bother making your forum a unique-looking snowflake when a large percentage of your users are accessing it via a homogenized app anyway?

Discourse Plugins become very difficult to support

Niche plugins with functionality that falls outside of the scope of the native app would be rendered unusable, so there’d be less incentive to take Discourse off the beaten path with whacky experiments, which can be hugely detrimental to ongoing innovation.

Higher maintenance is still a factor

A framework like React does significantly lower the maintenance load (at least on paper - I’m no developer), but that native app you’re putting out is still a completely different beast with its own unique points of failure. Sure, developers gotta develop for different browsers already anyhow, but at least any effort in that area still contributes to WWW convergence in a sense, whereas developing for different native platforms is doing favours for no one besides the respective platforms.

Conclusion: The native app strategy works far better for centralised platforms

If you’re Facebook, React makes a ton of sense. Everything is under your control, you have an established native app story out there already and the room for customisation of a user’s own space is very limited and predictable.

Could the equivalent of a Reddit BaconReader for Discourse forums still be a breakout success? Absolutely. But I’d rather leave that risk & reward free for the taking by another talented bunch of individuals, just not the core Discourse team.

15 Likes

Oooo, it works!!! Thanks!!! :smiley:

1 Like

I’m not sure what speed my Galaxy S4 gets, but I honestly hardly notice any real slowness. Sure, the initial load isn’t instant, but it isn’t like I have to set my phone down and wait, either. If you go the route of defining “older mobiles as non-JS”, I would like a way to force it into JS mode, as most of the time I visit via mobile out of necessity, not desire, and part of the necessity is being able to reply or act on a given topic/post.

This is mostly due to the lower acceptance of JavaScript and JS frameworks in the Japanese-Korean hemisphere. In fact, they are going to have their first Angular conference this year: http://ngjapan.org/ . I guess this just influences the management and engineering teams over there not to concentrate on optimizing JavaScript. The JavaScript scene is very weak in the Eastern Hemisphere.

As Jeff shows, it’s not a problem with Android, but a problem of the JavaScript language not supporting proper multicore execution.

Isn’t a JS website app also a “lower common denominator solution” when compared to native apps?

If the issue is that android devices improvement focus on multithreading, using web workers could help.

Me? iPad Air 1, as stated above.

[quote=“sam, post:68, topic:33889”]
Try latest, or meta, I worked around a horrible bug 2 days ago
[/quote]
It’s still weird on the iPad: I can select, but the iOS “copy/select/…” menu pops up on top of the quote reply button.

Point of clarification: we know the performance gap comes down to the software, not the hardware. Chrome on desktop is much slower than Safari on desktop, on the same hardware. (People just care less about the gap on desktop, because both are “fast enough”.)

After much deep-diving into V8 internals, I am convinced it is because V8’s optimizing JIT compiler is way too aggressive. It constantly attempts optimizations that turn out to be bad and need to be backed out. Basically, V8 gambles hard on optimizations: if they turn out to be right, it is blazingly fast. If they turn out to be wrong, it’s worse than not trying to optimize at all.

This is why you will often see “(compiled code)” as the number 1 thing getting garbage collected in an Ember app on Chrome. V8 is constantly compiling functions, discovering they are overspecialized, and throwing them away.
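To make the over-specialization point concrete, here is a toy sketch (my own illustration, not Discourse or V8 code): the same function fed objects of a single shape stays monomorphic and optimizable, while feeding it several property orders creates the kind of polymorphic call site that forces V8 to discard compiled code.

```javascript
// Toy illustration of why over-specialization hurts: V8 specializes
// `area` for the object shapes it has seen; objects with different
// property orders ("hidden classes") at the same call site make it
// polymorphic, triggering deopts and recompiles.
function area(shape) {
  return shape.w * shape.h;
}

// Monomorphic: every object has the same hidden class.
const mono = [];
for (let i = 0; i < 1000; i++) mono.push({ w: i, h: 2 });

// Polymorphic: same logical data, but four distinct shapes.
const poly = [];
for (let i = 0; i < 1000; i++) {
  switch (i % 4) {
    case 0: poly.push({ w: i, h: 2 }); break;
    case 1: poly.push({ h: 2, w: i }); break;
    case 2: poly.push({ w: i, h: 2, pad: 0 }); break;
    case 3: poly.push({ pad: 0, w: i, h: 2 }); break;
  }
}

const sum = (arr) => arr.reduce((acc, s) => acc + area(s), 0);

// Both produce the same answer; only the JIT's job differs.
console.log(sum(mono)); // 999000
console.log(sum(poly)); // 999000
```

Same result either way; the difference only shows up in the optimizer's ability to keep `area` compiled.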

10 Likes

You are actually describing a Microsoft tactic. As far as the user is concerned, the site IS at fault. It’s just that some of us know it’s an Android hardware deficiency. Let’s not tell them, since then they would only go to iOS like everyone else with a clue in this game. Once they get in iOS and realize all the lies they’ve been told about it are wrong, they will be just like a Mac user moving from Windows–they will NEVER give up the Apple gear. LOL

1 Like

@codinghorror What browsers did you run the benchmarks for the Apple devices in? I’m on an iPhone 6 Plus and I can’t run the benchmarks in Safari; it just won’t start. On Chrome 45, I got the following results:

Hi,

Just saw the tweet; I thought I could share my experience with Chrome on Android. Just to show how far I went, I’m making this HTML5 MIDI controller: DJ Crontab: Building a HTML5 Control Surface for Ableton Live, part 1; I didn’t have the time to document the JavaScript part yet. I acknowledge my problem is of a different nature: it’s not content, I’m not targeting low-end devices at all, and I don’t care about compatibility.

My first attempt was using emberjs, and this was ridiculously slow. Weak spots I could find:

  • using dirty checking instead of Object.observe, which makes Ember’s store really slow
  • using CSS classes is slow, especially at the top level
  • jQuery events bound on too broad a scope
  • other things I can’t remember now; they’re in my todo list of stuff to document

Then I migrated to aurelia.io which uses latest chrome capabilities, and a lot of custom code to allow fast repaint ( object.observe, canvas, webworkers, directly handle events myself instead of wrapping it with jquery, etc. ), and then I could get something that can be used for the purpose of realtime control.

But then again I don’t have any compatibility constraints and I don’t need content helpers. Aurelia is very light on features and documentation so far, but it proves JavaScript can be faster when using the latest features, which are available on any Android device (in contrast to, say, someone stuck with IE8). But since you want to make a bold move due to Android’s sluggishness, you should consider dropping Ember. The way it’s designed, I’m not sure they will implement the latest JavaScript features anytime soon.

So, these are hard choices and I’m not in your shoes, but I’m pretty sure a dedicated Android application is not the best investment, and that you should try harder to either fix the web application or rebuild it from scratch on a better foundation.

Ok… so we are comparing performance of JavaScript on two dissimilar devices… in two different browsers (the article does not state this, btw). What is the control here? Is this like comparing a propeller plane’s and a fighter jet’s taxi speeds? I feel like the real test that needs to be done is to compile Node.js and run the tests on both devices. I am getting this uneasy cheating feeling from Safari… and before you say “they would never”… I point out VW. As someone who has used Safari on iOS (who hasn’t?), I would not classify it as a blown-away-fast experience. In no way is it 4× as fast as Chrome on Android like these numbers suggest. Something is off here. I would expect them to be similar… not off by multiples.

3 Likes

Ember doesn’t seem to load in Safari; I had the same problem.

1 Like

I guess my question is: how big of a deal is this to actual Android users?

You’re comparing raw iOS numbers to Android numbers, and sure, they can be a little jarring.

But Android users aren’t sitting there holding an iPhone in one hand and their own phone in another. In other words, instead of comparing Android numbers to iOS numbers, maybe try comparing “Discourse on Android” numbers to “Other sites on Android” numbers. And not just numbers, but usability, etc.

I have a Galaxy Nexus, and only-okay internet usage comes with the territory of having an old device. Most websites are pretty slow. But I don’t also have an iPhone, so in a way I don’t know any better.

Discourse might be 5 times slower on Android compared to iOS, but it’s about the same speed (?) as other websites on the same device, and I think that’s the comparison most people actually care about. Apples and oranges, and all that.

I won’t say you’re making a mountain out of a molehill, but I do think maybe the people who live on the mountain don’t really mind that it takes them a little longer to get to the nearest Chipotle. Instead of worrying that people in the city can get to a Chipotle in 10 minutes instead of the hour it takes the mountain people, compare the time it takes the mountain people to get to the nearest McDonalds, because that’s the only comparison that matters to the mountain people.

Android users as mountain folk and Discourse as a Chipotle. I think that metaphor got away from me.

Edit: Whoops, I guess this is pretty much what @benfreke said. So, uh, what he said.

5 Likes

Just scored 236ms on my Note 5 using Next Browser. Looks like hardware isn’t the most important factor here.

Qualcomm Snapdragon series, Samsung Exynos series, I suppose Nvidia Tegra and standard ARM for the companies without an architecture license. That’s not that many, unless I misunderstood or I’m missing something. I just don’t think Google is trying all that hard to get Chrome high and tight. They seem to be depending on yearly faster hardware covering up the issues until they are fast enough for people to not complain.

With “configurations” I was referring to the fact that Apple has the A7, A7X, A8, A8X (I think) and a few combinations with an extra core. But their designs are VERY homogeneous.

The situation with Android (CPUs):
MediaTek
Qualcomm
Nvidia
Intel (x86)
Samsung
Rockchip

They vary from 1 to 8 cores. Some designs are 2 big + 2 small, some are 4 big, some are 4 small(ish).
That spans several architectures and bit widths. They presumably require different optimizations, and Google will have a hell of a hard time optimizing for all of them.
That’s not even getting into screen sizes, RAM amounts, storage speeds, etc.
There are several orders of magnitude more Android configurations than there are iOS configurations.

I wonder what the benchmark looks like running in Intel’s Crosswalk. When I was attempting to deploy IonicFramework apps, it greatly improved the performance and consistency of our app. It made scrolling almost bearable.

Is that really true, though? Most sites load a ridiculous amount of stuff, and a lot of it seems to rely on JavaScript. If JavaScript were turned off, the core content on the site might still load, but I don’t think most users disable JavaScript.

Therefore, it seems that @benfreke’s point still stands, with the qualification that the relevant benchmark for Discourse on Android shouldn’t be Discourse on iOS, but a representative basket of websites on Android, many of which probably suffer from excess JavaScript “enhancements”.

That’s a bit of backwards reasoning. With numerous slow cores versus a few high-performing ones, you can’t expect the same performance. Numerous slow cores will always be slower here; just the work to switch from core to core, the way these processors are designed, wastes too many cycles.

Then there is the problem of four faster, more sophisticated cores, with four slower, less sophisticated cores. The processors were not designed for all of them to be working on the same problem at once. It’s a fundamental error in usage.

It is a combination of software and hardware. Based on the Geekbench score, a Galaxy S6 should be able to do close to as well as an iPhone 5s, which has a very similar Geekbench single-core score. You can see this in the AnandTech results I screenshotted when comparing the native browser and Chrome.

@tgxworld the benchmark runs fine in my very stock iPhone 6s under Mobile Safari. Maybe hard refresh?

1 Like

I’m not all that sure it matters. How long before phones get 10x faster? How long before they’re 100x faster? Just limit your expectations, find new ways to make things seem more responsive. You are talking about devices a hundred, if not a thousand times more performant than the ones we’ve been happily playing Doom and Quake on. They might not be as quick as you’d like, but they are not slow.

1 Like

Slow is a relative term, so yeah, compared to the PCs that were around when Doom and Quake came out, no, phones are not slow; but that comparison is a fallacy. The experience on an Android phone is fundamentally slow compared to other platforms, noticeably so in fact.

Surely designing and validating the CPU cores is meaningfully more expensive (and time consuming) than working with existing ARM IP?

Fewer, faster cores are better for most single-user apps and OSs, not just JS engines, and when they’re not, GPUs are often a better option. So the question then is: why is Android in this sorry state, and what might the future hold?

How Did We Get Into This Mess?

I think that this is a result of an interplay between ARM’s business strategy, the strategy of SoC makers, and the marketing needs of Android phone vendors.

Android phone vendors are already at a disadvantage when competing with Apple because most lack the scale and integration of Apple, and therefore can’t control as much of the user experience. This situation lends itself to competing on specs.

Core-count and clockspeed are both numbers that can be flogged by phone makers against Apple, and against one another. Unfortunately, pushing clockspeed too far comes at the expense of an even more important spec: battery life.

The SoC makers need to deliver “improvements” at a suitable pace for the phone makers. Because there are multiple players in the Android market, a suitable pace is multiple times a year. Unfortunately for the SoC makers, there are forces outside their control that limit the pace of meaningful improvements. As a result, they have resorted to gimmicks, like large core counts.

The forces outside the control of SoC makers are manifold. Perhaps the most fundamental is the pace of improvements in semiconductor fabrication technology. Moore’s “law” has had a good long run, and over that run, the economic and technological forces involved have ended up synchronizing around delivering new process nodes approximately every two years. In between, the main opportunity for performance improvements comes from architectural changes.

This brings us to the other force outside the control of most ARM SoC makers: they rely on ARM for CPU core designs. Qualcomm and NVIDIA are major exceptions. Both have their own CPU cores, but even so, they still rely on ARM for the CPU cores in many of their SoCs. For Qualcomm, this is apparently the result of having misjudged how aggressively Apple would pursue its own core designs. This strategic misstep left them without any high-end 64-bit offering for a time, and resulted in them turning to an ARM core design while they finish their own in-house 64-bit ARM core. For NVIDIA, it seems to be more the slow pace of their own program to develop in-house ARM cores.

Developing their own ARM cores is an expensive and long-running commitment. They have to acquire the appropriate license from ARM, hire suitable staff, and commit to the long process of developing and validating an in-house CPU core design. The challenge of hiring should not be underestimated: it is pretty clear that the number of people who can be trusted in key roles in a program to develop a cutting-edge CPU core is quite limited; just witness the way certain key people have shifted around between Apple, AMD, etc.

I don’t think that people can expect ARM itself to develop a competitive high-performance core, because that undermines their relationship with architectural licensees, like Apple, Qualcomm and NVidia. The risk of alienating Apple is particularly high, I think, because Apple, which has core design, SoC design, device design, OS, developer tools, and application distribution under its control, can move to another instruction set architecture if need be. As a result, the core designs that can be licensed from ARM are likely to fall short of being leading edge.

What Might the Future Hold

So, given all this, what are the prospects for Javascript performance on Android, and what does it mean for Discourse?

It seems that, in the short term, things aren’t likely to get significantly better. My guess is that JavaScript engines aren’t suddenly going to get lots better on Android, but maybe there is some low-hanging fruit for real-world apps (rather than just optimizations that look good on benchmarks). I don’t have a good sense of the uptake of upgrades in the Android world, but assuming that meaningful improvements to JS performance on Android are ready to go in 6 months, it is going to be at least another 6 months before they are widely deployed.

There will be new Android devices released, with new SoCs. There is no guarantee that they will have a marked improvement in single core performance, but it seems likely that, at some point soon, more cores will no longer be viable or acceptable, and that raw single-core performance will gradually improve. It will be years though before there are meaningful improvements in the raw single-core performance of the average Android device in the field.

In the longer term, there is also some chance that raw single-core performance in new devices makes a significant leap at some point, when/if Qualcomm gets back into the swing of doing its own cores.

In the longer term, there is another possibility for Android devices with significantly improved single-core performance: non-ARM SoCs. I haven’t been keeping track of Intel’s efforts, but I know their mobile chips have good single-threaded performance, and I think they probably need to get a good foothold in the mobile market, or die trying. And then there is Imagination, which licenses the GPU cores used by Apple and other high-end SoC makers, and which recently bought MIPS.

One thing worth considering, that I don’t have a great sense of, is the impact of demographic trends. If we assume that globally, over the next 4 years, say, Android will hold or gain market share against iOS because of cost considerations, what does that mean for the performance of mainstream Android devices? Will most of those low-cost devices use old, lame SoCs and therefore lag the progress I predict/hope must come to Android single-core performance, or will they at least see the same pace of performance improvements, even if they start from a relative disadvantage?

tl;dr: What Does This Mean for Discourse?

As for Discourse, it seems to me that your assumptions should be that:

  • JS performance on mainstream Android devices will lag the iOS mainstream by a significant degree for at least the next 3 years.
  • JS performance in the Android mainstream will gradually improve in absolute terms.
  • JS performance in the Android mainstream may improve relative to iOS in the 1-3 year timeframe.

Recommendations based on the above:

  • Don’t move away from JavaScript.
  • Inform end-users of the problems in order to create incentives for browser and hardware makers to address them.
  • Based on usage stats, figure out the minimum current hardware/software you want to target.
  • Identify and prioritize application-level performance problems that have the greatest impact on user experience for users in your target.
  • Fix those worth fixing.
  • When in doubt, target performance improvements that yield gains on both Android and iOS, followed by those that can be tuned per platform (for example, the way you currently halve the amount of data sent to Android clients).
  • When possible, address issues by encouraging/supporting efforts to address them in the libraries/frameworks you use (i.e. Ember).
  • Repeat.
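The per-platform tuning bullet can be sketched concretely. A hypothetical illustration (not actual Discourse code; the function names, chunk sizes, and crude UA sniff are all made up for this example) of halving the initial payload for slower clients:

```javascript
// Hypothetical sketch of "tune per platform": send clients with slow
// JS engines a smaller first chunk of the topic list.
const DEFAULT_CHUNK = 30;     // assumed desktop/iOS page size
const SLOW_CLIENT_CHUNK = 15; // halved for slow JS engines

function chunkSizeFor(userAgent) {
  // Crude UA sniff purely for illustration; real detection would be
  // more careful, and ideally capability-based rather than UA-based.
  return /Android/i.test(userAgent) ? SLOW_CLIENT_CHUNK : DEFAULT_CHUNK;
}

function firstPage(topics, userAgent) {
  return topics.slice(0, chunkSizeFor(userAgent));
}

const topics = Array.from({ length: 100 }, (_, i) => i);
console.log(firstPage(topics, "Mozilla/5.0 (Linux; Android 5.1)").length); // 15
console.log(firstPage(topics, "Mozilla/5.0 (iPhone; CPU iPhone OS 9_0)").length); // 30
```

The win is that the slow device parses and renders half the DOM up front; the rest streams in afterwards.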

8 Likes

I don’t think the number of different devices is that big a deal as far as browsers go. x86 isn’t new, and ARM is ARM outside of the manufacturers doing their own designs (still based on ARM). If you sell cases, yes, the number of different Android devices would annoy you if you expect to cover every device. RAM amount shouldn’t make a difference, and speed of storage shouldn’t make a difference, outside of some contrived and unlikely edge scenarios, like a third-rate AOSP manufacturer breaking libraries and using janky memory, or something with a 2-in low-res 16x8 display. Screen sizes have not been an issue unless you are a particularly stupid developer using individually pixel-mapped layouts for your universal Android app rather than following all the guidance, stubbornly using Apple techniques from the pre-iPad, pre-iPhone 5 days and expecting them to carry over for some reason.

For all the benchmarks that have Apple abso-slaying everything ever, in practice their devices aren’t particularly faster in a very noticeable way compared to same-gen Android. In many cases, their animations make things take longer than they need to and they use image placeholders to cover-up slowness. I feel that there is a lot more to the story.

1 Like

We’re not talking about overall OS performance, so the slowness of their animations doesn’t matter (especially since Lollipop introduced many animations for the same purpose). Android is slower at processing JavaScript than the Nitro engine on iOS. That’s the only part we care about for this thread, even if the underlying issue is the CPU.

1 Like

Where are the tangible real-world differences between Android and iOS devices w.r.t. JavaScript, outside of benchmarks? If it only takes fractions of a second, or an entire second or two, longer to render a page, what difference does being 5× slower in a bench make, if it is inconsequential enough that it can be covered up by a swirly animation or two?

This is definitely what the Google folks reported on the bug that was filed on V8:

I updated v8:3664 with a prototype fix, which reduces number of deoptimizations caused by weak references. To see noticeable performance improvement we need to also fix v8:3663, otherwise there are many deoptimizations caused by high degree of polymorphism. Not clearing ICs would allow generated code to reach the slow generic case faster and thus avoid deoptimizations. This however is a big project (several months) as it requires support for weak references in polymorphic ICs to avoid memory leaks.

They made some incremental improvements in Chrome 44 and beyond, which definitely helped, as you can see from my Chrome benchmarks on a Nexus 9:

Chrome 44 (beta)
Render Complex List | 2.78 | 1421 | 34 | 359.85

Chrome 45 (dev)
Render Complex List | 3.32 | 1545.03 | 40 | 301.65

I also confirmed the Nexus 9 sees a consistent 300ms on Render Complex List, and as you can see from the Geekbench single-core results, the Nvidia chip inside the Nexus 9 is (still!) the fastest Android CPU on the planet. So there is a relationship between absolute single-core speed and JavaScript performance.

About a 20% improvement, which is not chopped liver, but when you are starting in a 500% hole… I don’t mean to fault the Google folks, they have done some good work in this area, but we need further improvements.

4 Likes

Was just reading this Anandtech review of the iPhone 6s… the numbers are crazy.

1 Like

You can also see in that graph that iOS 9 alone boosts JavaScript speed substantially on the same hardware. Compare iPhone 6 (iOS 9) with iPhone 6 Plus (iOS 8). Virtually the same hardware, yet big performance improvement in the upgraded browser.

How I wish Android worked the same way…

4 Likes

I cannot repro this. On my Nexus 7 (2013) I get 974ms in Chrome 46 beta, and 1067ms in Next Browser. Are you sure you are running the benchmark exactly as specified in the first post of this topic?

Your numbers are quite suspect, since they show the Note 5 as faster than the Nexus 9, which does 300ms in this test and is also the fastest Android processor according to Geekbench single-core results.

Would a framework or app architecture that made better use of Web Workers be feasible for Discourse? Is there a suitable Worker-driven design that would address the major performance concerns experienced here?

I realise this introduces new problems (concurrency, marshalling and transferring data between Workers, etc.), but is it conceivable that such an approach could make better use of multiple CPUs and reduce the dependency on single-core performance?
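The marshalling concern is worth quantifying: `postMessage()` structured-clones its payload, so shipping N topics to a Worker and back copies them twice regardless of how fast the Worker itself is. A runnable sketch of my own (a JSON round-trip stands in for structured clone, which is a rough over-approximation of the real copy cost):

```javascript
// Sketch of the Worker marshalling cost: data crossing the main
// thread / Worker boundary is copied both ways. JSON round-trips
// stand in here for structured clone so this runs anywhere.
function simulateWorkerRoundTrip(payload) {
  const toWorker = JSON.parse(JSON.stringify(payload)); // main -> worker copy
  const result = toWorker.map((t) => ({ ...t, rendered: true })); // the "work"
  return JSON.parse(JSON.stringify(result)); // worker -> main copy
}

const topics = Array.from({ length: 3 }, (_, i) => ({ id: i, title: `t${i}` }));
const out = simulateWorkerRoundTrip(topics);
console.log(out.length, out[0].rendered); // 3 true
```

Transferable objects (e.g. ArrayBuffers) avoid the copy, but the object graphs an Ember-style app passes around don’t transfer that way, which is part of why Workers aren’t a free win here.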

1 Like

Using workers doesn’t solve anything here. You’re still hitting all the deopts while rendering the HTML code.

5 Likes

I ran the benchmarks on my phone, a OnePlus 2 (all screenshots here), and it beat the “absolute best known Android score”. Those were pretty bold claims, since I doubt this is even close to the “absolute best” Android phone out there. Anyway, just one observation: Chrome stable is slow, and I can’t exaggerate how slow it is. I ran the benchmarks more than a couple of times per browser, and perhaps 10 times on Chrome stable: it crashed twice (after taking the screenshots, it crashed about 4 more times), and when it completed it ranged from 1000ms to 2000ms.

Chrome Latest Stable: 1105ms, it’s embarrassing… it almost always crashed on my phone (only completed the test 4 times out of almost 10).

Chrome Beta: 286ms This was the second fastest browser I tried, although on one run it went up to 450ms. 286ms was the fastest, the rest also being all under 320ms, with just one run going up to 450ms.

APUS Browser: 281ms This was the fastest, and its slowest score was actually 299 so I was surprised.

Firefox (stable): 345ms I was actually expecting it to be much worse, judging by @codinghorror’s comment

Firefox (beta): 375ms was not too bad, considering.

Next Browser: 311ms I had never used this browser, and only ran the tests on it since @Codemobile mentioned it. If it scored 236, perhaps the rest might also perform better on the Note5.

Dolphin: 593ms meh.

I guess Chrome Beta for now is a better option for those of us that want Chrome…

7 Likes

“How long before phones get 10x faster? How long before they’re 100x faster?”

If desktop CPU performance is any indication[1], 10× single-core performance should take one to two decades, and 100× three to four decades. BTW, 64× for desktop is around the one-atom-transistor scale, so it’s anyone’s guess whether we can actually reach that.

[1] Over the past 3 years, about 13% YoY. Intel thinks it can maybe do this for the next two years also, and hopes it has its next hat trick (EUV) ready after that. EUV was supposed to be operational in 2014.

Data: A Look Back at Single-Threaded CPU Performance (7-'15) - Album on Imgur
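The decade estimates follow directly from compounding 13% per year; a back-of-envelope check:

```javascript
// Years needed for an overall speedup `factor` at a compound annual
// improvement `rate`: solve (1 + rate)^years = factor.
function yearsToSpeedup(factor, rate = 0.13) {
  return Math.log(factor) / Math.log(1 + rate);
}

console.log(Math.round(yearsToSpeedup(10)));  // ~19 years (about two decades)
console.log(Math.round(yearsToSpeedup(100))); // ~38 years (nearly four decades)
```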

One thing I noticed on your benchmarks and the test runs I did on Android is the extremely high percentage error. Perhaps something is not right here?

Also noticed it; pretty worrying when compared to the mobile Safari screenshot @codinghorror uploaded with ~18% error. On the desktop I get ~30%/80ms in Chrome and ~3%/52ms in Safari (and ~300%/90ms in Firefox), so Safari is definitely doing something well.
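For anyone eyeballing those error percentages: one common way such a figure is derived (an assumption on my part, not necessarily what this benchmark reports) is the standard deviation across runs expressed as a percent of the mean, so noisy runs produce a big number even when the mean looks fine.

```javascript
// Sketch (assumption, not the benchmark's actual code) of a relative
// error figure: sample standard deviation as a percent of the mean.
function relativeErrorPct(runs) {
  const mean = runs.reduce((a, b) => a + b, 0) / runs.length;
  const variance =
    runs.reduce((a, b) => a + (b - mean) ** 2, 0) / (runs.length - 1);
  return (Math.sqrt(variance) / mean) * 100;
}

// Tightly clustered runs -> small error; noisy runs -> large error.
console.log(relativeErrorPct([50, 52, 54]).toFixed(1)); // "3.8"
console.log(relativeErrorPct([30, 90, 60]).toFixed(1)); // "50.0"
```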

1 Like

Beating a dead horse at this point, but aside from creating more maintenance… if you’re looking to be used globally, apps aren’t a good idea.

via Opera

6 Likes

For people who see a white page on the benchmark site on alternative browsers (say, Midori, or Ubuntu’s “Browser”). It needs Local Storage enabled.
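For browsers like that, a defensive feature check helps. A sketch of my own (written against an injected storage object so it can be exercised outside a browser; in practice you would pass `window.localStorage`):

```javascript
// Defensive check for a working Web Storage implementation. Some
// browsers expose `localStorage` but throw on write (private mode,
// storage disabled), so a try/catch write probe is the reliable test.
function storageWorks(storage) {
  try {
    const key = "__storage_probe__";
    storage.setItem(key, "1");
    storage.removeItem(key);
    return true;
  } catch (e) {
    return false;
  }
}

// In a browser: if (!storageWorks(window.localStorage)) { /* show fallback */ }
```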

3 Likes

I got 239ms in the Next Browser with my Note 5. I think this all has much more to do with the browser and not the processor. I also feel like this test is not really a reflection of real-life performance; benchmarks need to be done on real-life scenarios, like using a 3D game to benchmark a graphics card. Everyone keeps telling me it is because I am using Chrome on iOS… but guess what, Safari is a horrible browser and does a very poor job of displaying webpages. Why in the world would I care what kind of benchmarks it gets? The article does not state what browser was used… I actually assumed it would be Chrome, because it would be very odd to compare tests against each other when every variable was changed.

I walked past a developer a few days ago trying to use AWS SNS in Safari on OS X… it looked like someone had shaken his monitor and everything had just settled in random places… he was so used to websites not rendering correctly he didn’t even notice. I told him to open it in Chrome… it rendered perfectly. Safari is the new IE… only it has always been just as bad. Though I get this isn’t a discussion of what is a good browser… but you can’t strap a jet engine to a motorcycle and say “see, this Apple jet motorcycle is way faster than the Google commuter van if we put them on the salt flats and send them in a straight line”.

3 Likes

Hmmm… you know Chrome on iOS is a skin* over Safari, because Apple doesn’t allow other browser engines?

*Skin with a lot of cool stuff, like sync.

5 Likes

Got 242ms on Chrome beta. The error thing is interesting… And I will look into it. I am genuinely curious what the real benchmarks are for these devices. I use both iOS and Android every day… Don’t mind either… And have never noticed either slow with JavaScript.

Ok… after some digging I have learned that Apple actually uses Ember.js for some of their core sites… thus it is likely they have some very unfair optimizations in place for this test. I propose we find some other tests that more accurately reflect Chrome on Android vs Safari on iOS. I will go first…

Google’s old JavaScript performance test site:

Note 5 Chrome: 9141
Note 5 Chrome beta: 8444
IPhone 6+ on Safari: 9758
iPhone 6+ and Chrome: 515

What? You can’t blame that slowness on the CPU for iOS Chrome… more likely it’s because of Apple’s restrictions on development on iOS. Sadness.

1 Like

Can GPU power be used in JavaScript? Could it be possible? Many android devices have great GPU.

I’m reading this to see if it’s possible.

1 Like

@codemobile, you’re linking to a deprecated benchmark. Google Octane is the newer and correct version that was shown in the screenshots in the very first post in this topic…

@eyko thanks for benchmarking all the various “other” browsers on Android (Firefox, etc), as you can see, they’re all slower. Chrome Beta will usually give the best results. (Also I think there is something deeply wrong with your “Chrome Stable” install, I’ve never seen it crash for me on Nexus 9 or Nexus 7 in many times of running this benchmark.)

286ms is certainly in the ballpark for a just-released Android device with the latest CPUs. For example, the Nexus 9 has the highest Geekbench Android single-core CPU score and consistently achieves 300ms; but it was released in November 2014, so not exactly new.

It is certainly possible that with further V8 optimizations the Nexus 9, or a very new Android device released in 2016, could achieve 150ms in this benchmark – that’d put it roughly on par with the iPhone 6. I do wish and hope that can happen in 2016.

I have yet to see anything close to that to date, though.

2 Likes

Ember and Angular appear to be quite a bit more resource intensive and are just plain bigger, code-wise than several alternatives. This becomes especially noticeable on lower-end mobile devices and slower connections.

Ember load times are nearly 5 times slower than React and Ampersand; the following link contains a useful graph of some performance testing for comparison: https://github.com/facebook/react/issues/4974#issuecomment-144721245

Obviously that’s load times, versus benchmarking runtime as you’ve done here. But load time is certainly an indicator of resource use and has drastic effects on perceived performance.

React, Ampersand, and Backbone all perform very well in comparison with Ember.

The Filament Group has also done research here: Researching the Performance costs of JavaScript MVC Frameworks | Filament Group, Inc. (see the spreadsheet linked from their post for all the data).

Of particular note is their summary which I’ve pasted below:


Results
The data is available for inspection, but the following stood out to us:

  • Ember averages about 5 seconds on 3G on a Nexus 5 and about 3 seconds 3G in desktop Chrome.
  • Angular averages about 4 seconds on 3G on a Nexus 5 and about 3 seconds 3G in desktop Chrome.
  • Backbone appears to be the only framework with workable baseline performance on all connections.
  • Looking at the difference between the Nexus 5 and the Chrome desktop over 3G suggests that the execution time required to get the application on screen plays a significant role in the overall render start performance for Angular and Ember.
5 Likes

I don’t know what a good Octane score is, but I achieved a three-test average score of 7303 on my Moto X PE 2015

Safari 9 seems to have improved (okay, just 5ms on my laptop, but consistently), so perhaps it’s just down to browsers’ JavaScript engines and their optimisations, not CPU architectures. (Are there any similar tests with other frameworks?)

@codinghorror Single core performance is a problem with any JavaScript application on any platform. Considering the target browser support for Discourse, I’d argue that single core performance shouldn’t even be taken into consideration.

Run the same type of tests using Angular 2.0, you’ll see what I mean.

I’m obviously doing something wrong: I am comparing an iPhone 6s+ 128GB with an Android, and the Android seems to be twice as fast?

Baseline Render
iPhone 6s+ = 155.39
Android = 77.49

Baseline: Handlebars
iPhone 6s+ = 156.29
Android = 80.64

My iPhone 6S gives me an average 3-run score of 16198.

It may be slowER than iOS, but it’s fast enough to have a decent user experience if one puts one’s mind to it.

A forum site is not supposed to be CPU-bound (unless the monetisation relies on quiet bitcoin mining, huh?)

Perhaps the framework is too heavy, and perhaps the memory footprint needs looking at.

Ultimately, you can drive the JS memory footprint to zero by rendering incoming data and immediately throwing away everything but the generated DOM (keep ids in DOM and retrieve those whenever you need to talk to server).
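That render-and-discard approach can be sketched in a few lines. This is illustrative only, not Discourse code; the `cooked` field name and the markup are assumptions:

```javascript
// Illustrative sketch (not Discourse code; `cooked` is an assumed field
// name): render incoming posts straight to markup, keeping nothing in
// JS but the server id stashed in a data attribute.
function renderPosts(posts) {
  return posts
    .map(post => `<article data-id="${post.id}">${post.cooked}</article>`)
    .join('');
}

// In the browser you would then do something like:
//   container.innerHTML = renderPosts(posts);
//   posts = null;  // nothing retained in JS but the DOM itself
// and later recover the id from the DOM when talking to the server:
//   const id = event.target.closest('article').dataset.id;
```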

People got x86/DOS emulators running in bare Chrome at a decent speed, so it’s more a matter of engineering mastery than bad platform.

1 Like

@codinghorror A solution might be to render Discourse on the server and send it to the client side, enriching it with JavaScript logic afterwards. I know @tomdale has been working on FastBoot (https://github.com/tildeio/ember-cli-fastboot). But from the looks of it, it is definitely not ready for production apps yet.

I just ran your suggested test in Chrome on an i7 4700MQ / 16GB twice (shutting down some windows before the second run). With battery settings on “power saver” I got a mean of about 350ms. Switching to “Balanced” and “High Performance” modes got me into the 70-80ms range.

This means that you’re saying someone with their 2014 high-end laptop on power saving mode is exhibiting 2012 iPhone performance… and that’s a problem for you. If your app is so complex that the performance of a Haswell i7 in power-saving mode isn’t going to cut it, then is it really the state of JavaScript performance on Android, or is it the performance of Ember?

Maybe the iPhone 6S cores are as fast per core as a Haswell i7 without power-saving switched on (if so, Bravo for them). Or maybe the 6S’s chip/browser has been tuned for this benchmark. But if I’m on the road and my laptop in power saving mode is going to bog down on Discourse, I’m not going to blame Chrome, I’m not going to blame Intel, I’m going to blame your app.

13 Likes
Ember Version: 1.11.3
User Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36
.--------------------------------------------------------------.
|              Ember Performance Suite - Results               |
|--------------------------------------------------------------|
|            Name            | Speed | Error | Samples | Mean  |
|----------------------------|-------|-------|---------|-------|
| Render Complex List (HTML) |  8.55 |  37.7 |     105 | 116.9 |
'--------------------------------------------------------------'

My Intel(R) Core™ i5-3570 CPU @ 3.40GHz (Desktop Work PC) just lost to an iPhone :sadpanda:

7 Likes

@codinghorror what about WebAssembly? It’s gaining momentum and could be a useful tool for achieving some performance improvements.

1 Like

I’m not much of a developer so this may be out of scope, or just plain useless, but what about Mithril? That is what Flarum is using, and at least in my tests on Android I see much faster page loads compared to Discourse or NodeBB, for that matter. I don’t know anything about the backend of Flarum besides it being PHP, but I’ve seen articles such as https://medium.com/@l1ambda/mithril-vs-angular-vs-react-d0d659c24bae discussing the speed benefits of the framework. Perhaps more importantly, how is Flarum different in getting (at least in my small personal tests) faster initial page loads, and can any of that be incorporated by Discourse?

Additionally, I do like the idea of at least a setting for a non-JS/slow-phone fallback page, or even some kind of user-customizable (static?) front page, just to get something on the screen faster. In my Discourse installation (coming from old Drupal forums) I’ve seen such a large decrease in posts that I can’t help but think some of it is page loading speed on Android…

1 Like

We have this code in our post view:

const PostView = Discourse.GroupedView.extend(Ember.Evented, {

  classNameBindings: ['needsModeratorClass:moderator:regular',
                      'selected',
                      'post.hidden:post-hidden',
                      'post.deleted:deleted',
                      'post.topicOwner:topic-owner',
                      'groupNameClass',
                      'post.wiki:wiki',
                      'whisper'],
...

Ember has this feature where it allows you to set class names for the DOM element it’s rendering.

On Android we render 10 posts; everywhere else, 20.

This little snippet of code looks innocuous and follows best practices.

It is ultra convenient: if I add the .deleted property on a post, I know that magically, somehow, Ember will go ahead and fix up the class name so you have <div class='deleted'>

The problem though is that this feature has a cost.

_applyClassNameBindings: function () {
      window.counter1 = window.counter1 || 0;
      var start = window.performance.now();
      var classBindings = this.classNameBindings;

      if (!classBindings || !classBindings.length) {
        return;
      }

      var classNames = this.classNames;
      var elem, newClass, dasherizedClass;

      // Loop through all of the configured bindings. These will be either
      // property names ('isUrgent') or property paths relative to the view
      // ('content.isUrgent')
      enumerable_utils.forEach(classBindings, function (binding) {

        var boundBinding;
        if (utils.isStream(binding)) {
          boundBinding = binding;
        } else {
          boundBinding = class_name_binding.streamifyClassNameBinding(this, binding, "_view.");
        }

        // Variable in which the old class value is saved. The observer function
        // closes over this variable, so it knows which string to remove when
        // the property changes.
        var oldClass;

        // Set up an observer on the context. If the property changes, toggle the
        // class name.
        var observer = this._wrapAsScheduled(function () {
          // Get the current value of the property
          elem = this.$();
          newClass = utils.read(boundBinding);

          // If we had previously added a class to the element, remove it.
          if (oldClass) {
            elem.removeClass(oldClass);
            // Also remove from classNames so that if the view gets rerendered,
            // the class doesn't get added back to the DOM.
            classNames.removeObject(oldClass);
          }

          // If necessary, add a new class. Make sure we keep track of it so
          // it can be removed in the future.
          if (newClass) {
            elem.addClass(newClass);
            oldClass = newClass;
          } else {
            oldClass = null;
          }
        });

        // Get the class name for the property at its current value
        dasherizedClass = utils.read(boundBinding);

        if (dasherizedClass) {
          // Ensure that it gets into the classNames array
          // so it is displayed when we render.
          enumerable_utils.addObject(classNames, dasherizedClass);

          // Save a reference to the class name so we can remove it
          // if the observer fires. Remember that this variable has
          // been closed over by the observer.
          oldClass = dasherizedClass;
        }

        utils.subscribe(boundBinding, observer, this);
        // Remove className so when the view is rerendered,
        // the className is added based on binding reevaluation
        this.one("willClearRender", function () {
          if (oldClass) {
            classNames.removeObject(oldClass);
            oldClass = null;
          }
        });
      }, this);

      window.counter1 += window.performance.now() - start;
    }
  });

On my i7 4770K Win 10 box, the cost of this convenience amortized across displaying 20 posts runs from 5ms to 15ms

On my Nexus 7 the same feature costs 35ms to 50ms. Considering it is only rendering half the posts, this particular abstraction is about 14× slower on the Nexus 7 than on my desktop.

This is just the upfront cost; then go and add the hidden teardown cost, GC cost and so on, which on Android are severe. A lot of the slowness on Android is actually due to teardown of the previous page (on desktop the teardown usually beats the XHR, but this is almost never the case on Android).

This specific example is just an illustration of the endemic issue. In React and some other frameworks you just render and don’t carry around the luggage of “binding”; instead you just rerender when needed. This approach means you don’t need bookkeeping, and the bookkeeping is the feature that is killing Android perf.

I am confident Chrome will be able to catch up to Safari and bridge the 40% performance gap we are currently observing. Having Android be 40% faster will be awesome, and we don’t even need to do anything but wait to get that.

However, all we can hope for is a 40-60% perf gain. Considering it takes seconds to render a topic on a Nexus 7 and sometimes even 10 seconds to render a complex page like the categories page, I am not convinced this is quite enough.

Thing is, we know when a post is deleted (we hit delete) … a post becomes a whisper (never) … turns into a wiki (wiki button clicked) and so on. We can afford some extra code for these cases, especially for huge perf gains. We really only care about 2 pages when it comes to optimising heavily (topic/topic list)

The only approach that will be fast enough for Android devices in the next 2-3 years is going to be unbound renders with triggered refreshes for our topic and topic list page.

Ember can probably get 20% faster, Chrome will probably get 40% faster, but a React style unbound render is going to be an order of magnitude faster, it’s a totally different ballpark.
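To make the contrast concrete, here is a minimal sketch of what an unbound render could look like, reusing a few of the class names from the snippet above (the function names are made up for illustration, not a proposed API):

```javascript
// Hedged sketch of the "unbound render" idea: class names are computed
// fresh from the post each time, with no observers and no bookkeeping.
function postClasses(post) {
  const classes = [];
  if (post.hidden) classes.push('post-hidden');
  if (post.deleted) classes.push('deleted');
  if (post.wiki) classes.push('wiki');
  return classes.join(' ');
}

// We know exactly when these flags change (the delete button, the wiki
// button, ...), so instead of a binding we just refresh that one post:
function onPostDeleted(post, elem) {
  post.deleted = true;
  elem.className = postClasses(post); // one explicit refresh, no observers
}
```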

I am confident we can keep one codebase and be maintainable and fast enough, we just need to work closely with @tomdale and @wycats to unlock some of this unbound goodness in Ember (which is already unlocked quite a bit for Fast Boot) and we can make Discourse fast even on Android.

19 Likes

Indeed, and React lends itself well to using webworkers: Flux inside Web Workers - Narendra Sisodiya - Medium

So that is a sort of solution, albeit one not everyone can implement…

1 Like

Have you considered server side rendering of components ? That’s about the best performance gain you can receive on mobile devices. Reducing client side work is the key, keep ajax requests to a minimum, force page refreshing and keep logic to a server, fetch all results before the client is attached.

I will cite an old blog post by Twitter from 2012, and the year today is 2015 – you are experiencing the same issues as Twitter was 3 years ago:

This is the famous “time to first Tweet” benchmark which proves that JavaScript has become something it was not meant to be.

3 Likes

This is a really interesting thread, ultimately summarized by this.

we need to start considering alternatives for the Discourse project

Client-side heavy apps are at odds with the restrictions and capabilities of low-powered mobile devices. Filament Group’s “Performance Impact of Popular JavaScript MVC Frameworks” was written a year ago but is still as relevant today, perhaps more so given the proliferation of client-side MVC apps.

We need to look at mobile web apps in their own right with resource restrictions and different capabilities to desktop or we’ll be having this same discussion in ten years.

1 Like

Shut it down! We’ve been sprung!

It’s not so much a matter of “mastery” as a matter of tradeoffs. How much slower would development be if we were hand-rolling all our JS to get to “DOS emulator” grade performance, and thus how many fewer features would Discourse have?

3 Likes

The larger point is that the iPhone 6s is about as fast at JavaScript as the average two-year-old laptop or desktop. Do you think mobile devices will have less memory and less CPU power over time?

I am worried about $99 Android devices in developing countries that have to pay the 500% JavaScript performance tax most of all.

6 Likes

Do you think mobile devices will have less memory, less cpu power over time?

Of course not, but it’s clear that these frameworks aren’t designed for the current crop of mobile devices. Given that’s the case if you want to ship something that works well on them you need to target them specifically and not continue to hope that things improve.

The frameworks themselves aren’t likely to get smaller or less complex over time, they will probably use more resources in the future as the devices become more capable making a lot of that progress moot.

I tend to agree with the others in the thread suggesting that a ‘lite’ version of discourse, with server-side rendering and javascript designed specifically for mobile could be a good addition, I realise that’s easier said than done. Like you wrote in the original post, I too am doubtful that the Ember apps of 2017 will perform well on intentionally low end devices that Google are trying to get in the hands of the next billion smart phone users.

Wrong!

Mobile devices will get slower over time.

They already have, you’re putting your latest smartphone down and worrying about $99 Android phones, aren’t you? How about $50 tablets this Christmas? $25 phones next spring?

The population outside the First World is increasingly at your doorstep. So it’s not so much about a tradeoff between features and speed; it’s a tradeoff between features and the future. The future is in cheap commodity devices.

1 Like

I love Discourse for forums, but my website is the worst possible use case for this: a catalog of free Android games without IAP, with most users on cheap Android phones. Downvoting is another sticking point for this. I am testing something lighter, but keeping an eye on Discourse anyway.

I will share my analytics results and/or details for this test with the developers if they PM me. It’s the worst case possible with Android.

If there aren’t JS-heavy websites, phones won’t improve.

My opinion is that since Discourse is paying the performance price, they should continue and make the best possible use of JavaScript. Make the JavaScript worth its weight.

An alternative simple render is the obvious workaround; they could encourage the open-source world to contribute to the codebase to help with the budget/resource limitations. Perhaps put it on the roadmap and dedicate a small effort just to show the way, or whatever works better.

No. Computers always get faster (and smaller) over time. Feel free to look up graphs of the last 50 years if you don’t believe me.

Hell, look up a graph of mobile performance over the last five years if you don’t believe me.

You can certainly wonder how fast that speed will trickle down to $99, $50, $10, $5, $1 devices… but that it will happen has never once been in question by any educated person familiar with the history of computing.

The real issue is the arbitrary, utterly platform specific 500% performance penalty large JS frameworks pay on Android, feel free to review the entirety of this topic for specifics. Just scroll up and read.

1 Like

Your phone is getting faster; mine is too; everybody’s is, in fact.

But low-end devices enter the game much, much faster than technology is advancing.

So on average your readers’ phones do indeed get slower.

It’s like how the mean age went down during the baby boom years. Everybody was still aging, but the mean age went down, you see?

3 Likes

Hi Sam,

@wycats is doing really interesting work on first render performance (re-rendering is already blazing fast thanks to Glimmer). We also have issues with mobile (where CPUs are slower), and think this will help a great deal.

You’ve probably seen it already, but if not there is a video here:

3 Likes

I think the point here is that whatever is in the top-of-the-line Android devices today will be in thousands or millions of low-end Android devices in a year or two. Phones may have gotten faster in general, but the bulk of users will still be on slow devices. And the OS itself is probably not going to be optimized for this “obsolete”-ish hardware by then.

1 Like

Do you think your hardware intuition might actually be prejudiced by Engadget-enlightened, SSD-boosted experiences you enjoy day to day?

It took the Google/Motorola deal to jerk the low-end market out of the Gingerbread age into its current state, and you know what? It’s still at 2013 levels now and will remain there through next year unless another black swan event happens.

4 Likes

In a nutshell, the fastest known Android device available today… performs 5× slower than a new iPhone 6s

Numbers can sometimes be deceiving, and I wonder if this is one of those instances. Not that the facts aren’t presented and valid. But it reminds me of when I see the values ‘1’ and ‘2’ and the statement, “the new thing contains 100% more than the previous thing!” It’s technically true but, relatively speaking, not a big deal.

Back to the numbers: you’re talking about fractions of a second that are hard to digest at times from a standpoint of actual usage. I’m sure the argument can be made that all of these ‘fractions’ add up to a slower experience, but it is not as if we are talking about multiple seconds slower per interaction in the category of ‘modern device vs. modern device’. I’m not trying to deny the evidence, because it exists, but rather asking: does it really make a noticeable difference?

This problem was very predictable in 2013.

There is a strange expectation that mobile devices will follow the same path as desktop without considering the unique hardware constraints.

And for some period of time to come it will most likely get worse before it improves, because the last few billion people on Earth without a smartphone will get a very low-cost, low-performing device and represent a huge chunk of the user population.

In addition, for some time to come there is no breakthrough in battery technology on the near horizon, so no matter how many cores you add to the SoC, they will all throttle down like crazy after the benchmark is over. Normal everyday usage with some multitasking will actually show a worse user experience than a benchmark might suggest.

Apple has been in the unique position of innovating on a premium product for which the market realities of most of Planet Earth do not exist. The gap will widen at least until nobody is left that might buy a premium smartphone which has to happen at some point. All markets hit the end of the S-curve.

Presumably within 10-20 years there might be some sort of nano-tech battery breakthrough that will permit Moore’s Law to do its thing again…

7 Likes

We’re talking about the fastest android device being slower than the slowest iOS device. If you look at a more typical mid-range Android device (something with about the processing power of the iPhone 5), you’re looking at the difference between ~300ms and 1.5 seconds (at least). You’ll see a big difference in bounce rate simply based on those load times alone.

5 Likes

Obviously lots of people run into this brick wall and go down the Native App road, which works.

Off the top of my head, there might be two ideas to consider.

  1. Transpile the JS to some more performant subset along the lines of asm.js, though without specific browser support there might not be a useful subset.

  2. Transpile the JS to C++ or Java to get a native app out of JS. Keeps a (mostly) common code base (since substitutes for dynamic code would be needed). Surely somebody has a product like this? (It would work like .NET Native in Windows 10)

3 Likes

This is like someone making an argument that a 5.0 V8 Mustang is faster than a basic older mini-van. That’s a completely obvious statement. Junk will be junk and slow. Period. I think it’s more important to make strict comparisons between the elite Samsung devices (for example) and the iPhone products. Then the delta can be assessed, but my point was the delta might be over amplified in millisecond values compared to the real experience when comparing similar devices across platforms.

But it’s not a race.

More like you’ve been asked to organize transport of medicine to earthquake victims and you have Mustangs and Mini-vans to work with and you ask your organizers for more Mustangs and what you will actually get allocated to you in the next few years is a shipment of golf carts.

Ever notice in this analogy, the right device, hardware, car, etc. is picked for the job? In this same light, if it is a potentially life threatening situation, don’t plop code on a crappy Android tablet that was on sale at the local big box store for $69 last weekend.

I’ve worked in the medical industry and there is tight scrutiny and regulations around everything including hardware and risk analysis is always done. This would be determined ahead of time and have to be taken into consideration to avoid legal ramifications due to injury as a result.

Yep, the next two years will be the fastest we’ll be adding people to the internet. Most of those will be on cheap mobile phones. No way these $50-$100 devices will be as fast as your shiny iPhone 6s, even in 2 years.

Nice graphs: On the future of the Internet and everything | Asymco

Compare scores of the low end SoC of the Lumia 550 (December 2015) with your high-end phone: http://www.notebookcheck.net/Qualcomm-Snapdragon-210-MSM8909-SoC-Benchmarks-and-Specs.148336.0.html

It gets 1/3 the Octane score of an iPhone 5S (September 2013). If x64 is any measure, Chakra will score an additional 2-3× worse than those Android benchmarks I linked to. And the Lumia 550 is even 2-3× too expensive for a lot of non-Westerners. Those cheap Firefox OS phones you keep hearing about have the speed of the iPhone 2G/3G.

1 Like

Exactly.

It’s always hard to predict the future. But we can look at it this way: the last 50 years had Moore’s law, with the cost per transistor dropping roughly every 24 months. We don’t have that anymore; 28nm was the last node that brought the cost down.

So on the lower end of the spectrum, they could add a better screen over time, a bigger battery, maybe better IPC, etc., but the performance increase will be very, very slow over the next few years in the low-end sector. Not to mention the cost will likely shift to 4G and WiFi first (display, 4G, and WiFi speed are much easier to sell and market). Oh, and if the cost allowed them to add more CPU performance 2-3 years down the road, guess what they’d do? Add another 2 small cores. Which in Discourse’s case doesn’t help.

Things will definitely improve over time. But don’t expect CPU performance to improve much at the low end for the foreseeable future.

3 Likes

I am a developer and can only agree with @Chad_J_Dupuis: Mithril would probably solve your performance problems. It’s also very easy to learn and straightforward to use.

We are about 2.5 years into a 10 year mission, so you would have to argue that in 5 years the situation would be identical. It is definitely true that Android JS perf has lagged massively behind iOS even in the last 2 years versus where I predicted they would be (closer to iOS, perhaps 1.5x slower, not 5x slower as it is now), so it is a question of whether that graph line shifts up or down or stays the same.

It is also encouraging that Apple is becoming so dominant, at least in first world countries, but I hope we have a better JS perf story in Android over the next 5 to 7 years, ideally in 2 years or less.

4 Likes

Option 1
Stop using 1 monolithic JS framework to do everything. Use RactiveJS for your UI and data binding. You could then run it on the server side using NodeJS and send Android users a pre-rendered view. Desktop and iOS users could get your full JavaScript-powered version. Both versions would use the same code, since you’re using NodeJS on the backend for Android users.

Ractive takes a different approach from Angular and other frameworks. Your template lives as a string, and is parsed (on the server, if needs be – Ractive is isomorphic) into a tree-like structure that can be transported as JSON. From there, Ractive constructs a lightweight parallel DOM containing all the information it needs to construct the real DOM and set up data-binding etc. In my view, this is a more hygienic approach. Parsing the template before the browser has a chance to has many benefits…

Option 2
Use asm.js. I don’t know the state of tooling for this but this could speed things up. But might make things a lot more complicated.

RactiveJS is not particularly faster than Ember: http://matt-esch.github.io/mercury-perf/ <-- whoops, pretty old benchmark

It’s even slightly slower on Safari. It’s probably different on Android, since in Chrome it’s slightly faster (2563ms / 3835ms). But if you could run a lot of it server side, that might help a lot. Actual fixes on the Android side will have to come from Chrome, so that Ember is not hitting all these optimization/deoptimization cycles. Then some additional changes to make JavaScript multicore (hint: look at Erlang/Elixir). But that would be way more long-term.
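For anyone curious what those optimization/deoptimization cycles look like in practice, here is a toy sketch (not Ember code; the object shapes are invented). V8 keeps a call site fast while every object it sees has the same hidden class, and falls back to slower generic code once the shapes diverge, which is exactly what binding-heavy code tends to cause:

```javascript
// A call site V8 can optimize while all inputs share one hidden class.
function total(item) {
  return item.price * item.qty;
}

// Monomorphic: every object has the same shape, so `total` stays fast.
const fast = [];
for (let i = 0; i < 1000; i++) fast.push({ price: i, qty: 2 });

// Polymorphic: extra properties and varying property order give the
// objects different hidden classes, forcing deopts to generic lookups.
const slow = [];
for (let i = 0; i < 1000; i++) {
  slow.push(i % 2 ? { price: i, qty: 2, deleted: false }
                  : { qty: 2, price: i });
}

const sum = arr => arr.reduce((acc, item) => acc + total(item), 0);
// sum(fast) and sum(slow) return the same answer; only the second path
// keeps bouncing V8 between optimized and deoptimized code.
```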

asm.js in particular is much slower on any browser that does not specifically support it. That it “works” everywhere is the best thing you can say about it.

Mithril.js seems to be the fastest kid on the block currently. But I don’t know if that’s just because it doesn’t do very much.

4 Likes

What about prerender.io for serving the app wIthout js? It’s free, open-source, and I believe there’s a Rails implementation.

As for me, I had no problems whatsoever using this site to post this very comment, and I’m on Android 5.1 on a Note 3 (albeit overclocked to 2.5 GHz, on a custom kernel). I refreshed this page, and load times were very fast, under 1s. I think you may be exaggerating the problem.

By the way, it’s not only on Android that the performance of Discourse is poor. If you load this page with a warm cache, it takes 3.5 seconds before you have anything on the screen on a middle-of-the-pack laptop from 2011 (i3-2310M) running Microsoft Edge (Win 10 build 10240). The average Windows computer was 4.4 years old back then, and that number has probably only increased, so most people see worse numbers.

Different issue; Edge is literally no faster than IE11 in Ember. Try Firefox or Chrome.

1 Like

It seems that Google realizes they have a problem:

Unfortunately their solution does not sound optimal for Discourse:

Third-party JavaScript code, for example, has no place in the AMP environment.

Since their current performance problems seem to primarily lie with their platform “partners”, I’d agree that the possibility of “real” improvement in the near future is slim. They still haven’t managed to address the Android update problems that have plagued them for years.

1 Like

We have a complex web app that is built on Backbone and Marionette. Our QA has not noticed any significant differences in how well it runs on Android vs iOS, except to say that iOS seems “smoother”, but not faster. Marionette provides a pretty good hierarchical structure for building a UI on the client and it is much lighter weight than Ember or Angular. However, compared to Angular the model binding is not nearly as good. I have not really used Angular, but I did research it to decide if we should switch to it.

I don’t have any idea how much of a rewrite it would be to switch to Marionette, but it might solve your problem without building the pages on the server.

That’s a ridiculously old version of RactiveJS. They’ve made a ton of performance updates since. The current stable version is 0.7.

Not entirely true. That security bug is for 2.2. Newer Android versions have a lot of core components decoupled from the base installation controlled by OEMs and carriers. Google has also tightened the Android requirements, giving them more control over Android.

Everything below 4.0 seems doomed, though, but the market share for those versions is rapidly declining.

Certainly seem to be some promising things going on there. At around 23:30 (and elsewhere) there is talk of doing things in a way that V8 will handle more sensibly (fewer de-optimizations). The likely results are far beyond my skills to predict. Did anyone else get the sense that this might make an appreciable dent in the issue? Or are these kinds of improvements marginal at best?

With regard to single core performance, http://www.androidauthority.com/fact-or-fiction-android-apps-only-use-one-cpu-core-610352/ (ok, it’s an Android “fanboy” site) suggests that today’s mobile browsers can use multiple threads. So your assumption is that your JavaScript can only be executed in a single thread and is also the bottleneck? Multiple cores, and especially the big.LITTLE core design, seem to be more energy efficient.

Javascript is not a multi-threaded language. All of the code on a single page must be executed on one core, it cannot be spread across cores (although it can move to a different core). The browser can use other cores for HTML/CSS/etc. rendering, image rendering, and other tasks, but it can’t use multiple cores for the JS code on a single page.

6 Likes

This is an interesting thread, and I have a few thoughts considering the future of JS app performance on Android.

I don’t think switching frameworks is practical. Ember was chosen because of the community’s maturity and the maintainability of its applications. Porting a codebase this large is unfeasible.

Although devices may not satisfy hardware requirements, we can trust that mobile browsers stay auto-updated with the latest features. The browsers that make up 97% of usage are close to their latest versions.

Therefore, progressive enhancement that targets the latest browser features is a smart way to ensure performance improvements. (e.g. Web Workers, Service Worker, HTTP/2, etc…)

If a browser supports Web Workers, initialize a couple on first load and offload JSON parsing or AJAX requests to them.

Default to HTTP/2 for faster download of assets. Fallback to HTTP/1.1 if a browser doesn’t support it.

Since requests are cheap, HTTP/2 makes bundling an anti-pattern. This will let you prioritize requests, while giving you fine-grained control over cache. Use Service Worker to offline large parts of your app.
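For the Service Worker part, registration can be a strict progressive enhancement. A hedged sketch (the `/sw.js` path and function name are assumptions):

```javascript
// Register a Service Worker only where the API exists, so browsers
// without support (most 2015-era Android stock browsers) simply skip it.
function registerOffline(path) {
  if (typeof navigator === "undefined" || !("serviceWorker" in navigator)) {
    return false; // unsupported: the app still works, just without offlining
  }
  navigator.serviceWorker.register(path); // returns a promise in browsers
  return true;
}

// registerOffline("/sw.js");
```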

I have a feeling that HTTP/2 would give a dramatic increase in page load performance.

3 Likes

Sure, usually JavaScript code execution is not really multithreaded, but there are Web Workers, which can use all CPU cores; see http://stackoverflow.com/questions/11871452/can-web-workers-utilize-100-of-a-multi-core-cpu.

Still, the question is why Ember is bottlenecked by JavaScript single-thread performance when that does not seem to be the case for other frameworks, which would lead me to the conclusion that Ember might not be a good choice for mobile web applications today. Single-thread performance is not going to increase on mobile for a long time, whereas multithreaded performance will continue to increase for much longer.

See also YouTube
Ok biased again, but still …

Same exact performance issue with Angular, confirmed by a Google employee, in writing, in their bug tracker. Please read the entire topic before replying.

Also, single thread performance has increased dramatically on mobile over the last four years. Feel free to scroll up and read where this was cited, with data and graphs, several times…

You mean https://code.google.com/p/v8/issues/detail?id=2935, I guess. Sorry, I must have missed this in this long thread.
Sure, that does look like a bug/limitation in V8. That is sad, but better single-thread performance would only be a workaround, because it only hides the cost of the deopts. And as we all know, single-thread performance is not going to improve for a long time, so relying on it is problematic.
BTW, I guess the Intel-based Android devices (mostly tablets, and probably a minor market share) should be faster then, because they usually had good single-thread performance in benchmarks.

An idea: Make the Discourse API a first-class citizen.

I’m specifically thinking of a professional micro-site and API endpoints which target Android developers: Key creation specific to client-side apps, true sample mobile Java code, etc. For example:

BTW with regards to multicore versus single thread performance, http://www.anandtech.com/show/9518/the-mobile-cpu-corecount-debate

was the article I had originally in mind.

1 Like

Then we could bet that iOS will kill Android in the following 10 years. :smile:

It is to an extent not such a problem because everyone else is in the same boat.

If Discourse runs faster than Facebook on the same Android device, the actual timings are irrelevant as the user is used to and expects the lag.

Android speeds are only going to increase and they are usable at the moment.
Just don’t look at your friend’s Apple and there’s no problem.

1 Like

I don’t think Angular is a good example of a performant framework, it’s fallen way behind some of the newer frameworks. Angular 2.0 has massively improved performance mind you. Here’s a good benchmark that pushes the DOM in terms of raw updates per second:

http://mathieuancelin.github.io/js-repaint-perfs/

1 Like

You’ve mentioned something like this in a few replies. Do you have an idea of how you would detect “ancient devices” or subpar performance? Seems like a tricky thing to detect / progressively enhance on top of.

True, if it is an ancient device running a current copy of Chrome for Android for instance. We already detect Android and send down half the data we send for iOS – so there is a way, and it is reliable. :pensive:
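For anyone curious, the platform check described above can be as simple as a user-agent test. This is an illustrative sketch, not Discourse’s actual code; the “half the posts” policy just mirrors the mitigation mentioned in this thread:

```javascript
// Classify the user agent so the server can send a lighter payload.
function isAndroid(ua) {
  return /\bAndroid\b/.test(ua);
}

// Mirror the "send half the data to Android" mitigation described above.
function postsPerChunk(ua, desktopCount) {
  return isAndroid(ua) ? Math.ceil(desktopCount / 2) : desktopCount;
}
```

Detecting subpar performance beyond the platform (the “ancient device” case) is harder; a timing probe on first load is one option, but UA sniffing at least degrades predictably.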

1 Like

Seems to me it’s an opportunity, albeit a forced one, to rethink what discourse needs to be on mobile devices. That’s not necessarily a bad thing.

If we were just talking about old devices with poor performance it would be difficult to justify a rethink, but since it’s new devices, and so many devices, it seems like a relatively easy thing to justify. The steps you take to accommodate slower JS performance on Android today will benefit not just today’s Android devices, but older devices, and also new underpowered mobile devices that will be driving the growth of the web, for years to come (as a billion new people find their way online).

You could really see this as the right problem to have at the right time.

1 Like

Is there a company that makes buying an older iPhone as easy as certain 3rd party companies used to make buying a used Mac? Perhaps an opportunity here?

Indeed. One example is Gazelle:

http://buy.gazelle.com/buy/used/iphone-4s-8gb-at-t

I would not buy anything earlier than the iPhone 5 though. 5s would be my budget recommendation.

I am trying out meta on the Doogee X5, an $80 Android phone. The topic page is a bit sluggish, but the front page is quite good due to my optimisations a while back; the composer is functional, albeit sluggish.

Biggest pain points are initial load and topic page

We have ongoing plans to improve both

  1. I reduced initial payload size yesterday by stopping rendering of noscript content on mobile
  2. @eviltrout is looking at splitting the initial JS payload a bit so the composer is only loaded as needed
  3. We are investigating virtual DOM based components, which will make the topic page very much faster
  4. The composer is sluggish to react when typing; going to look at that one now
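Point 2 above (loading the composer bundle only when needed) can be sketched roughly like this; `loadBundle` and the injection callback are hypothetical names, not Discourse internals:

```javascript
// Cache in-flight/loaded bundles so a lazily loaded chunk (for example a
// composer bundle) is only injected once, no matter how often it's requested.
var bundleCache = {};

function loadBundle(src, inject) {
  if (!bundleCache[src]) {
    // inject is expected to add the <script> tag and return a handle;
    // in a browser it might be:
    //   function (s) { var el = document.createElement("script");
    //                  el.src = s; document.head.appendChild(el); return el; }
    bundleCache[src] = inject(src);
  }
  return bundleCache[src];
}
```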
13 Likes

I’m now getting ~250ms on the 1.11.3 complex list test w/ Chrome 46 + Nexus 9 (Marshmallow)

Ember Version: 1.11.3
User Agent: Mozilla/5.0 (Linux; Android 6.0; Nexus 9 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.76 Safari/537.36

.---------------------------------------------------------.
|            Ember Performance Suite - Results            |
|---------------------------------------------------------|
|        Name         | Speed | Error  | Samples |  Mean  |
|---------------------|-------|--------|---------|--------|
| Render Complex List |  4.12 | 883.19 |      52 | 242.81 |
'---------------------------------------------------------'

Baby steps!

3 Likes

Interesting, the Nexus 6p gets 280ms on Chrome latest (stable), so that’s a minor improvement as well and it appears to be Marshmallow (Android 6) related.

1 Like

Your data doesn’t cover a very important problem: Apple cheated to get their performance.

Architecture lock-in is a very real software problem. It’s one reason for using Java code in Android and is the reason why Google specifically targets 3 different ISAs. While the ARM lock on mobile isn’t near as strong as the MS and x86 lock on desktops, it is still very strong.

Even when better chips were available (eg. MIPS proAptive which was faster, half the die area, and lower power than A15 with better coremark performance), the market still chooses ARM showing that the lock-in is real. With ARM, that’s not a terrible situation because you can either purchase their designs or create your own as long as you pay them – companies definitely prefer this to x86.

If you aren’t familiar, it takes 4-5 years to create a new CPU architecture, test/verify it, and tape it out. Not even the most powerful players in the game like Intel or IBM can beat this (they instead overlap design teams to keep the changes flowing). ARMv8 launched in Oct 2011.

Apple managed to not only tape out, but ship a chip using the new ISA in TWO years. Either PA Semi had super-engineers or something strange is going on. ARM’s simplest design (the A53 – based on an existing micro-architecture) didn’t ship until late 2014, and their “high-end” A57 didn’t get out the door until 2015. The A57 is a poor design with a lot of issues, indicating it was rushed out the door.

A72 (shipping mid 2016) is a fixed version of A57 with a lot of improvements that had to be sacrificed to get A57 out the door and it should come as no surprise that the fixed version finally starts to catch up with Apple’s design from 2013. Qualcomm is also taking 5 years to launch their ground-up redesigned processor (Snapdragon Kryo). Samsung’s custom core also doesn’t arrive until next year. Denver managed to catch up part-way, but only because it’s a transmeta-based design and the biggest changes were to the firmware (and it’s been in progress for years – originally intending to get around x86 patents until they struck a deal with Intel).

If you ask around at these companies, pretty much everyone was beyond shocked when Apple not only launched in just two years, but the chip was actually good. After all, ARM would run afoul of quite a few laws if it could ever be proven that they had given Apple a head start. How did Apple do this? How did a third party launch 3 years before ARM’s own A72?

Apple’s changed ISAs before (PowerPC to x86) and it’s not easy (especially when all the code is native). Apple also has some loose ties to ARM since they co-founded the company in the 90s (they sold the shares in the early 2000s, but that’s not the only important thing in business). With x86 being a one-horse race (AMD has basically been out for years), Apple needs a different desktop ISA.

MIPS was on the market and is a great ISA. The only problem is switching from ARM would be a big problem. Instead, it would be nice if ARM would simply clean up their ISA which they did. The resulting ISA is extremely similar to MIPS and perhaps explains why ARM was so keen to spend hundreds of millions to get access to the MIPS patent portfolio after Imagination bought them.

The rest is supposition based on these (and a couple other) facts. Apple was one of the big companies rumored to be looking at getting MIPS. They start building a micro-architecture with a MIPS-like ISA in mind. They then come up with a cleaned-up ARM ISA that is very similar to MIPS, so they can continue work and go either direction late in the design game (in fact, they could probably design both control blocks at the same time as the micro-arch and the uncore would remain unchanged).

All that’s left is to talk to the ARM guys. Either ARM loses one of their biggest, most influential customers or they negotiate to adopt the new ISA. Apple has around 18 months to finalize and tape out, with another 8 months to ramp up for the new launch. This also explains ARM’s rushed A57 and the much longer delay by the other companies.

The big win for Apple is that they outmaneuver their competitors for at least 2 years and most likely 7-8 years.

7 Likes

Cordova + Crosswalk is a nice way to work around that problem. Cordova makes it possible to build apps with just HTML + CSS + JavaScript (and you can optionally add native “plugins”), Crosswalk is a plugin that makes the app use an optimized recent version of Chromium instead of the built-in browser.

Android built-in browser performance is abysmal at best, recent Chromium via e.g. Crosswalk gives a significant performance boost.

Your codebase needs to diverge very minimally, which is to say you can ship the same code for web and app with something like if (window.cordova) { ... } -wrapping the platform specific code (if any is needed at all).
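A tiny sketch of that branching pattern, with the environment passed in rather than read from the global `window` (names are illustrative):

```javascript
// Decide between the Cordova-native path and the plain web path.
// env stands in for the window object so the logic is easy to test.
function sharePath(env) {
  if (env && env.cordova) {
    return "native"; // e.g. call a Cordova sharing plugin here
  }
  return "web"; // plain browser fallback
}
```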

They also turned out a smartphone that beat their entire competition (BlackBerry) back in the day. Normal chip manufacturing usually follows the schedules of the market; you don’t want to waste time researching a faster chip if nobody’s buying. The fact that Apple turned around a chip in 2 years is just a matter of wanting it enough.

This isn’t a conspiracy, it’s the fact that someone was willing to pay for the effort. Google benefits from the fact that they use the same chips and can aim for a cheaper market using whatever gets pushed in the ARM division. To put it another way, If Google was really keen on raw performance, and put as much emphasis on, say, AMD, you’d probably see a pretty competitive mobile chip from them as well.

Really though, the issue is built in browsers on android suck.

Where are the profiling graphs? Final numbers mean very little without the flame charts. Where are the actual bottlenecks in the code?

1 Like

Wow, thanks! I actually wasn’t expecting this from my quasi-rhetorical question; I’m so jaded by speculations.

To look at this laterally, would it not make sense for Discourse to either hire devs or, internally, focus on optimization of the framework they rely on? If performance is an issue, which it seems to be given these threads, would it not make sense to appoint people whose sole job would be to deal with this, be it internal performance or upstream?

Having a Discourse fork wouldn’t be a bad thing for Ember.

We work on performance all the time, we upstream fixes, report bugs and so on, in fact front page performance was not at an acceptable level till I moved us to raw rendering for each row.

“Just hire more devs” as appealing as it sounds has the real world constraints of … hmmm … money.

All our devs are able to work on this problem, but we need to balance feature work and customer support and hmmm all those other things it takes to run a business :slight_smile:

The big :tm: problem (for Google) though is that somehow Safari has an about 40% edge over Chrome when it comes to running Ember stuff… on top of that Apple somehow managed to put Desktop level performance into people’s pockets with the 6S.

1 Like

I realize how client-esque that sounds. What I mean is, if it’s a major concern, it needs at least one dedicated specialist. Hospitals don’t usually have their GPs doing cancer research between operations.

Not even every iPhone user is running these chips so, while this may be the future, it’s certainly not the present. Fixing Ember render waves is a bigger win regardless of chipset; Hell, it’ll help folks on laptops get a few more minutes of battery life. Everything counts.

1 Like

I recently bought a Doogee X5 cause I like pain. I found meta completely usable on it. First load is painful, and topic show could be faster, but it is still usable. I am the Discourse performance tzar; I spend a lot of time on performance, and it says a lot for a company of ~7 full-time staff to be able to have this level of focus on it. I blog about this stuff, and I am constantly looking at performance.

I cannot afford to work on Android Chrome’s 40% performance hole compared to Apple WebKit, or somehow magically make ARM chips on Android up their game. These are things Google needs to focus on, and they have a few more than 7 developers :slight_smile:

We also can not afford, at the moment, to ship someone into a sealed room to redesign a brand new front end framework for us, even if we could afford it, it would end in tears.

Instead, our approach is to rebuild our boat while it’s floating in the big treacherous ocean. We want to keep sailing; we do not want to take it to the dock.

12 Likes

If you are suggesting that the reason Apple could bring a 64-bit ARM into production a couple of years before anyone else was that Apple blackmailed ARM to adopt an Apple ISA then I think you are wrong, and I will explain why. And I apologise for not understanding your post if that isn’t what you are suggesting.

To get one thing out of the way, I believe Apple did have a significant input into the 64-bit ARM architecture. And I’m certain they were not the only large company that had significant input. ARM works like that. They talk to some “partners” who are both interested in their future products and who may guide ARM to improve those products.

The whole issue of the timing of the introduction of ARM’s 64-bit is interesting. I’m sure they were working on their 64-bit architecture for a long time, initially to prepare the way for when they would need it, and then for real. I’m guessing here, but it seems likely that ARM’s timescale was dictated by when they thought 64-bits would be needed for consumer products, rather than servers. Having now observed a couple of word-length transitions (16 to 32, and 32 to 64) the transition always takes longer than expected, and then happens very quickly. The transition takes longer because architects come up with mechanisms to extend addressability without increasing word length (c.f. PDP-11, x86, ARM A15). However, once the longer word length machines are available, and the software is available, the transition happens quickly - because software development is easier.

I believe that Apple’s move to 64 bits so quickly was a huge surprise to their competitors. One reason that ARM’s processors took so long to develop is that ARM didn’t think they were needed so early. After all, why would you move to 64-bit if you could wait? Wouldn’t 64-bit just be bigger, more power hungry and slower (after all, you need more memory bandwidth to support those 64-bit pointers)? Apple came to market with a processor on which (the important) 64-bit software ran faster than (the same) 32-bit software - because Apple exploited the extra bits in a pointer to significantly improve the performance of their run-time system (see mikeash.com: Friday Q&A 2013-09-27: ARM64 and You).

Apple design capability extends not only to the processor (the design that you could licence from ARM) but also to the whole SoC and its software. This means that Apple can make tradeoffs which other chip companies cannot - for example, playing off cache-size against clock frequency, playing number of processors against cache sizes. Knowing how your software works makes a huge difference here. If you are not vertically integrated this is very difficult to do. To optimise across the system you would need deep cooperation between (for example) ARM (processor design), ST (SoC) and Google (Android).

If we look at the history of the Apple SoCs we can see increasing amounts of Apple’s design capability being deployed over time. Apple A4 (March 2010) used an ARM A8. Apple A5 (March 2011) used a dual ARM A9 with - I believe from the die photos - an Apple designed L2 cache. [This would be perfectly possible under a standard ARM licence as the A9 processor pair have a standard AXI bus interface to the L2 cache]. With the A6 (September 2012), Apple introduced their own 32-bit processor design, Swift, followed a year later (September 2013) by their first 64-bit processor (Cyclone), with Typhoon and Twister following.

A full implementation and verification of the ARMv7s architecture is pretty complex - there is a huge amount of cruft. There are also parts of the architecture which are difficult to implement very efficiently (e.g. the conditional behaviour which looked like a really good idea back in the 1980s). It’s possible that Apple were able to back-off on some parts of the microarchitecture in the knowledge that they didn’t affect (Apple’s) performance much. But Swift remains a very impressive processor; if I recall, it was earlier into production and higher performance than ARM’s A15.

I don’t think an ARM-V8 implementation is much harder than a V7s. Especially if you are judicious in your choice of what to implement (and hence verify). I suspect that you can choose to have only user mode available in 32-bit mode, along with a 64-bit user mode and the rest of the exception levels in 64-bit mode. I don’t know whether that’s what Apple have done, but it would speed up the production of a 64-bit processor. By the way, ARM have not chosen to do this for A57 and A53 (I have no certain knowledge of more recent 64-bit processors), so I assume they have a harder job than Apple.

So, to summarise. I think Apple (and others) had input to ARMv8. I think Apple took (in particular) Qualcomm by surprise when it introduced and exploited its 64-bit processor as soon as it did. So one reason for Apple’s lead is that they chose to move quickly and allocate their resources accordingly. I also think Apple have been guided by their deep knowledge of the software that runs of their products to pay attention to the areas which matter most, and Apple have probably chosen to develop only a subset of the functionality that ARM are developing. Finally, it is probably the case that Apple are better at processor development than ARM. Don’t get me wrong - ARM are a great company - a great IP licensing company.

6 Likes

Even the iPhone 6 is substantially faster than anything on Android. The 6s widens the gap even further. Android handsets are mostly competitive-ish with the iPhone 5s, if they are very new.

Also @samsaffron it is not a 40% performance hole, it is 3x to 5x performance hole. 300% to 500%. Not 40%. We mitigate this by sending half the posts / topics to Android, so that cuts it to 150% - 250%.

More specifically:

  • When comparing devices with similar Geekbench single threaded scores, like the iPhone 5s and the Nexus 6p (both get around 1350) the performance difference is 1.6x in favor of the iPhone 5s. Of course one of those devices is from 2013, and one is from two months ago…

  • When comparing current flagship devices, it is not even close – the iPhone 6s is almost 5x faster than the Nexus 6p on render complex list in Ember.

1 Like

I guess that is my point: Apple has a huge edge CPU-wise, and there is an artificial 40%-60% that can be made up in software just because of constant deopts in Chrome.

I think this discussion is heading in the wrong direction. It does not matter what the future will hold or what the current state of the art is. The focus should be on what is out there. Unfortunately, this means that you must assume T-3 years (i.e. 2012 for now) performance metrics for both mobile and desktop. On the other hand, this means there is an opportunity to optimize Discourse for any user on any device, including desktop, which means increased responsiveness and battery life for everyone.

Sidenote on devices/CPUs/market share: Single-core performance is unlikely to improve massively. It’s cheaper and more efficient to pack more cores into a chip than maxing out existing ones. This approach is not followed by Apple since they seem to optimize on a dual core stack (which works well for them). And while iPhones are #1 in the Western hemisphere, there is not only a shortage in any other region, but also a gigantic tail of other devices – keep in mind that Apple is not the market leader, it’s just the iPhone. And this includes a huge amount of older iPhones including 4s and 5/5C which were sold for a long time even after the successors (and their successors) were announced (and are still selling in non-developed countries to target the lower-price market) – which means lower performance for a huge amount of population. Also mind that market share is not equal to mind share.

As for Discourse, I’d like to see improvements on the overall performance before or while thinking about a “lite” version. While I like a lot what I see so far, there are opportunities for reducing the workload on slower/older devices (again, both mobile and desktop!) without thinking about a greater architectural change in Discourse:

  • Update Ember to v1.13, which includes a new renderer similar to what React does. I expect this to help a lot with Discourse. [currently Ember v1.12 is used AFAICS]
  • Update Ember to v2.2, which (in addition to performance improvements) removes unnecessary cruft and thus lightens the load on network, memory and CPU. You can even use jQuery 2 with Ember v2 to further reduce amount of code and improve performance! (Ember upstream dependency update: https://github.com/emberjs/ember.js/pull/12321)
  • Pre-compile partials on the server
  • Delay everything that is not needed at render time (this may exclude pre-loading smaller payloads over the network but they should not be rendered unless really needed!) [partially done]
  • Do not render anything when a tab is in the background; aggressively throttle polls (both local and over the network) [partially done]
  • Reduce complexity of markup and layout (so both HTML and CSS), think about creating smaller, more flexible components [partially done, always optimizable]
  • Use push (through SSE or Websockets) instead of polling [this may be done already, no idea]
  • Be less aggressive with updates, i.e. update less often (let the browser GC and the CPU go to sleep), or maybe do not update at all but show indicators that something has changed and load the new data on user request only (i.e. by pushing a “load more” button). This can be improved by heuristically determining whether computing resources are available to update the site in-place vs. just providing indicators.
  • Think about parts that can be out-sourced to a (couple of) WebWorker(s) so multi-core CPUs are used more efficiently.
  • Reduce amount of data and generally optimize memory usage. Pushing less data through the hardware always makes a system more efficient.

While I don’t know what the team has done until now or what can still be optimized code-wise (I didn’t really look through the sources :confused:), I hope this list provides some points you can think about. Especially the Ember update should help!
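As a small illustration of the background-throttling point in the list above, the poll interval could be derived from the Page Visibility API; the 10x backoff factor and the `reschedulePolls` name are arbitrary assumptions:

```javascript
// Compute a poll interval: back off hard when the tab is hidden so the
// browser can GC and the CPU can sleep.
function pollInterval(hidden, baseMs) {
  return hidden ? baseMs * 10 : baseMs;
}

// In a browser, roughly:
//   document.addEventListener("visibilitychange", function () {
//     reschedulePolls(pollInterval(document.hidden, 3000)); // hypothetical
//   });
```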

3 Likes

Well, let’s check what happens on my blazing fast Skylake desktop PC and 64-bit Google Chrome latest stable:

http://emberperf.eviltrout.com/ render complex list (this test is the most representative of Discourse real world performance)

Ember version 1.11.3 – 39ms, 16.5% error
Ember version 1.12.0 – 46ms, 15% error
Ember version 1.13.10 – 97ms, 191% error
Ember version 2.0.2 – 94ms, 181% error
Ember “latest release” – 72ms, 29% error
Ember “latest beta” – 77ms, 26% error
Ember “latest canary” – 79ms, 32% error

Of course on Android you need to multiply these numbers by 5, on iOS multiply them by 2 (for iPhone 6s) or 3 (for iPhone 6).

We are at Ember 1.12 now, and moving to any later version would cause us to be almost twice as slow.

1 Like

Nexus 6p (flagship android) vs Doogee X5 (ultra cheap) vs iPhone 6 (prev generation apple), initial page load:

13 Likes

Interesting. Is this because the new Ember renderer is not optimized, or because the test suite / Discourse is not optimized for the new renderer? The Ember v1.13 post mentions the issue with too many observers (or keeping track of those), which (AFAICR) @sam mentioned a couple of times in this discussion as being an issue for Discourse.

That benchmark actually has no observers and is based entirely on rendering a bunch of nested views.

The glimmer engine in 1.13 is focused on re-render performance, not initial performance which is benchmarked in that test. Since then, the Ember team has been working hard on improving initial rendering performance and we’ve seen some minor improvements in the latest canary which is good.

I should add that the vast majority of Discourse templates don’t have any problems, and we could render much slower without anyone ever noticing. It’s mainly a problem on our topic lists and topic views. We’ve hacked around the topic list by rendering raw Handlebars templates.

The topic view has a lot of string rendering now, which is tricky to maintain, and we’ve maxed out the performance gains of that approach. We’ve got a promising idea for improving render performance on that view, though, with a bunch of custom code. So the good news is that even if Ember performance doesn’t improve, we will likely be able to make Discourse much faster.

10 Likes

Ember Version: 1.11.3
User Agent: Mozilla/5.0 (Linux; Android 6.0; Nexus 6P Build/MDB08L) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.76 Mobile Safari/537.36

Using Chrome stable on my Nexus 6P, I get the following. I’m using Ember 1.11 to try and match the benchmarks run in Jeff’s original blog post.

Any idea why the error is so high?

.------------------------------------------------------------.
|             Ember Performance Suite - Results              |
|------------------------------------------------------------|
|          Name          | Speed | Error  | Samples |  Mean  |
|------------------------|-------|--------|---------|--------|
| Render Complex List    |  3.49 | 414.73 |      43 | 286.72 |
| RCL (HTML)             |  3.32 | 530.24 |      40 | 301.63 |
'------------------------------------------------------------'

The error is so high because V8 screws up optimizing the JS for Ember (and Angular, etc) and causes constant deoptimizations. You can read the bug reports already filed on this in their bug tracker for more info.
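For readers unfamiliar with deopts: one classic trigger (illustrative, not necessarily the specific one in the V8 bug above) is a hot function seeing objects with different hidden classes, which degrades its inline caches:

```javascript
// A hot function that V8 will optimize for one object shape.
function total(point) {
  return point.x + point.y;
}

// Same property order every call: the call site stays monomorphic and fast.
var fast = total({ x: 1, y: 2 });

// Different shapes (reordered or extra properties) at the same call site
// make the inline cache polymorphic and, in a hot loop, can force V8 to
// deoptimize the compiled code.
var slow = total({ y: 2, x: 1, z: 0 });
```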

Yikes, that’s a bit of an oversight for the company that made Angular, though obviously different divisions of the company. The “mean” looks alright though, if it can be trusted with such a high error rate. At least we’re finally beating the iPhone 5 :confused:

It might be difficult to find someone prepared to take this on…

…but perhaps someone on the Chrome team would be prepared to be a “Chromium Perf Sheriff” for a specific test for this issue.

You can see the Chrome Performance Dashboard; it highlights regressions and improvements, and even pinpoints the specific (range of) code that caused the improvement or regression.

Higher visibility of the performance issue would be beneficial, as would automated testing across standardised Google Android devices.

It’s not just about seeing improvements; it’s also about ensuring there are no regressions and things don’t get worse.

1 Like

Check out the single core performance of the upcoming Snapdragon 820

Source: http://www.tomshardware.com/reviews/snapdragon-820-performance-preview,4389-4.html

Not quite up with the Apple A9, but a hell of a lot closer than anything we’ve seen so far. Maybe there is hope for Ember/Js frameworks on Android after all.

One would hope. The Anandtech numbers are a little less encouraging when they tested against actual JS benchmarks.

http://www.anandtech.com/show/9837/snapdragon-820-preview/3

Still, better – Qualcomm basically sucking for the entire year of 2015 did not help matters.

True, but step 1 is getting the single core performance up. There’s still a lot of work to do on the software side, but hopefully this starts a trend of Android chip makers keeping their single core performance up and trying to get to parody with Apple’s chips. From there, there’s still plenty of work to do in V8, but the hardware needed to be there too.

Half of the puzzle is getting better, at least. Now it’s Google’s turn.

Let’s hope they get ‘parity’, although ‘parody’ would be sufficient to improve the situation at this point :smile:

1 Like

The real solution might be a combination: render some stuff server-side, use an unbound renderer, and use web workers.

This kind of problem gets me wondering…

… Is there such a thing as a trustworthy bug / feature implementation bounty programme that could be used to target this V8 / Chrome / Chromium issue?

There would need to be some quite specific requirements.

I would throw £20 (~$30 USD) in personally without even thinking about it…
… the number of man hours lost reading this and other threads and dealing with user complaints must add up.

It would mean more eyeballs on the problem. I just wouldn’t want the additional attention to be too distracting for any Google developers who might already be working on the issue, although I don’t believe there is any commitment of solid time on it.

This is basically what Angular 2.0 is doing, and to my knowledge it’s more or less what React does. We’ll see if Ember jumps on that particular bandwagon, Fastboot is still very much a work in progress but who knows what they have planned.

I can’t help but laugh at the post you were replying to, suggesting Apple cheated or bought into ARM. ROFL. Then I saw he got 6 likes… people do believe this shxt. Sigh…

The ARMv8 implementation design started in 2007, picking up speed in late 2009. It was announced to the public in late 2011, and Apple shipped their first 64-bit SoC with the iPhone 5S in late 2013.
This doesn’t mean ARMv8 was unknown to the world before then. Like you have said, ARM generally consults and works with their partners on many things, if not everything; that is why they have many implementations and reference designs to suit their customers’ needs. And this is not only on hardware, but on software as well, such as compiler writers.

Any other company with an ARM architecture license, would likely to have been involved in the design stage. And if they could be bothered to actually work on 64bit design all the way from the start, there is no reason why they cant ship their SoC in 2013 or 2014!

So why were everyone on the market 2 years late on ARMv8 64bit? Well it take 1- 2 years to get an implementation of an Architecture, and then another 6 months to tape out. Even ARM didn’t have their ARMv8 reference design finish when Apple shipped A7. It wasn’t just the competitors were shock, it was a shock to ARM as well. Because NO ONE, absolutely NO ONE think they will need 64bit, not yet, not then, not on a Smartphone.

Qualcomm’s Anand Chandrasekher actually got demoted for saying 64bit is not needed. He was properly right, but Qualcomm was under pressure from their customers / shareholders to offer 64bit chip so he was not allowed to say that. ARM was under intense pressure as well, and to fast track the whole ARMv8 reference design.

1 Like

We agree!

I claim no first hand knowledge, but I am inclined to agree that ARM were also surprised - if the input from their customers had been that 64-bit was needed, ARM would have moved faster. This illustrates a problem that the ARM ecosystem has: the ecosystem is not a substitute for a single vertically integrated company when it comes to understanding the full stack. Perhaps Qualcomm understands phone software well (although I doubt it) - but they certainly didn’t understand the potential of 64-bit, and the other phone chip companies who are ARM licensees have no depth in software.

Another of your points - Apple decided to move early; others could have, but didn’t.

Android’s browsers not only suck at JS performance, they suck at CSS animations and have extra oddities to deal with (vh units as the top bar collapses, for instance). It’s baffling how great Chrome is on desktop, and how little they care about it on mobile. Clearly it’s doable on mobile, because iOS does it. That’s why when you compare the top Android tablet to a 2-3 year old iPad on web browsing, the iPad almost always wins (unless it runs out of memory).

1 Like

There might be hope for Android’s future. According to the news below (from Ars Technica, which I find well above average as far as clickbait goes), Google plans to also build its own processors (and has hired semiconductor engineers from PA Semi). Hopefully these would have better single-core performance than Qualcomm’s octa-core CPUs:

4 Likes

Little late to the party but we’ll take it! :smiley:

Like a good internet citizen, I pass this link along without having made a real attempt to fully read it or understand it:

B3 generates code that is as good as LLVM on the benchmarks we tried, and makes WebKit faster overall by reducing the amount of time spent compiling. We’re happy to have enabled the new B3 compiler in the FTL JIT. Having our own low-level compiler backend gives us an immediate boost on some benchmarks. We hope that in the future it will allow us to better tune our compiler infrastructure for the web.

B3 is not yet complete. We still need to finish porting B3 to ARM64. B3 passes all tests, but we haven’t finished optimizing performance on ARM. Once all platforms that used the FTL switch to B3, we plan to remove LLVM support from the FTL JIT.

5 Likes

Not very relevant to Android. Always nice to hear about JavaScript getting faster though :slightly_smiling:

The biggest short-term improvement to Android speeds is coming from Discourse itself, in the form of vdom:

This is going to be huge, and I really hope the official announcement (@eviltrout will do a proper writeup in a blog post) sparks further debate about the state of JS on Android and what’s being done to improve it on every part of the stack – Discourse, Ember, Chrome and Android.
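To make the idea concrete, here is a toy sketch of the virtual DOM approach, purely illustrative with hypothetical names; Discourse’s actual renderer is built on the far more complete virtual-dom library. The point is that rendering produces cheap plain objects, and only the computed patches ever touch the (slow) real DOM:

```javascript
// Build a cheap plain-object "virtual" node instead of a real DOM node.
function h(tag, children = []) {
  return { tag, children };
}

// Diff two vnode trees into a flat list of patch operations; only these
// patches would ever be applied to the real DOM.
function diff(oldNode, newNode, path = '0') {
  if (oldNode === undefined) return [{ op: 'create', path, node: newNode }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  if (oldNode.tag !== newNode.tag) return [{ op: 'replace', path, node: newNode }];
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], `${path}.${i}`));
  }
  return patches;
}

// Adding one <li> to a list yields a single 'create' patch rather than
// a full re-render of the whole list.
const before = h('ul', [h('li'), h('li')]);
const after = h('ul', [h('li'), h('li'), h('li')]);
const patches = diff(before, after); // one { op: 'create', path: '0.2', ... }
```

This is why vdom helps most on slow single cores: the JS work is cheap object comparison, and expensive DOM mutation is minimized.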

4 Likes

This will make the first hit on Discourse (where we download a lot of JS, parse and compile) 500% faster, and normal JS execution 10% faster.

But looks like it’s only for OSX for the moment.

Only relevant in that it could make Android seem slower yet again, in comparison to Apple :slightly_smiling:

Speaking of work on the V8 engine, Addy Osmani posted:

New V8 JavaScript performance improvements:

  • Object.keys() is now up to 2x faster
  • ES6 rest parameters are up to 8-10x faster
  • Object.assign() is now ~ as fast as _.assign()

https://plus.google.com/+AddyOsmani/posts/gAKsdwsNgZN

more detail to come
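For anyone who wants to sanity-check claims like these on their own device, a rough micro-benchmark harness is easy to sketch. This is hypothetical code, and the absolute numbers vary wildly by engine, device, and JIT warm-up, so treat them only as rough relative indicators:

```javascript
// Time a function over many iterations; crude, but enough to compare the
// relative costs of the operations V8 reportedly sped up.
function bench(label, fn, iterations = 100000) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  return { label, ms: Date.now() - start };
}

const obj = { a: 1, b: 2, c: 3 };
const sum = (...rest) => rest.reduce((acc, n) => acc + n, 0); // ES6 rest parameters

const results = [
  bench('Object.keys', () => Object.keys(obj)),
  bench('Object.assign', () => Object.assign({}, obj, { d: 4 })),
  bench('rest parameters', () => sum(1, 2, 3)),
];
// results[0] -> { label: 'Object.keys', ms: <device-dependent> }
```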

8 Likes

Robin’s blog post about switching out Ember rendering for vdom on topics is now live, and has benchmark numbers for Android

9 Likes

In a bizarre turn of events, the Samsung S7 Exynos (non-US) is quite a bit faster here, about 170ms on complex list… but the Samsung S7 Qualcomm Snapdragon 820 (US) scores a mind-bending 750ms? I have to assume it’s something about Chrome / Android that can’t deal with the new CPU?

Scores can be found by following this tweet:

Good Job Android, you caught up to the iPhone 5s! Well, sometimes! :wink:

6 Likes

I got 269ms & 259ms with an S6 Edge+ (Note 5). Is that a typical score for this device? It just went through a system software update last week.

Android 5.1.1 Chrome 49.0.2

Wouldn’t the main architectural difference, i.e. fewer cores with higher single-threaded perf, be the main factor at play here? It seems less like an issue of Android catching up to Apple than of Ember not scaling well across multicore devices.

Here is a deeper dive into the differences between the two different processors you might get in a Samsung S7:

Differences in scores depend on the task; note the SunSpider score.

3 Likes

s/ember/javascript/g

You can use web workers, but it’s tricky to isolate methods and pass the messages around.

1 Like

I think your analysis, and the fear that this can only be solved via hardware, is off. The real problem is that JavaScript is a single-threaded language and most JS engines are still single-threaded. The multiple cores on typical Android hardware are ill suited for JavaScript. But this can hopefully be mitigated in two ways: 1) there are already some JavaScript extensions that expose multi-threading; if Google adopts one of these and you were willing to rewrite some of your JavaScript to be multi-threaded, you could boost performance. 2) If you’re really lucky, Google might start to embrace multi-threading in the JavaScript engine itself, giving you a performance boost without needing to do any work.
The move to multiple cores is something Google has been aware of and pushing for a while now, and Google knows that the future is JavaScript, and that future must involve taking advantage of all the capabilities of the platform. I think you’ll find that a multithreaded JS engine and language extensions are coming.

1 Like

It’s plenty fast on iOS. Benchmark it yourself if you don’t believe me. Come back and post numbers if you dare :wink:

Apple is a full year ahead in hardware, because Qualcomm had an awful year:

I don’t think there’s any way to sugarcoat this, but 2015 has not been a particularly great year for Qualcomm in the high-end SoC business. The company remains a leading SoC developer, but Snapdragon 810, the company’s first ARMv8 AArch64-capable SoC, did not live up to expectations

Even the 820, which is better – thankfully! – is not even as fast as the old iPhone 6, much less the new 6s. And the iPhone 7 will be shipping in less than 6 months with an even faster CPU.

This idea that there is some magic in multithreading is just not backed up by benchmark data for typical apps outside of video encoding. If your cores are all slow, it doesn’t matter if you have 24, 48, or a million of them.

3 Likes

I saw this discussion on Hacker News today, where one of the points made regarding the performance of various JavaScript frameworks in V8 is that Ember is apparently not as good as others, at least according to some benchmarks. The discussion is about the post “The Chrome Distortion: how Chrome negatively alters our expectations” by Chris Thoburn.

5 Likes

Thanks for the link, but allow me to rant against the authors. If you want to post one JavaScript benchmark, why on Earth would you post SunSpider?

ArsTechnica also uses other browsing benchmarks together with SunSpider; surprisingly, all three show a smaller but consistent advantage for the Exynos (using SunSpider, 532 ms vs 632 ms; Octane has them at 9143 vs 11115). Look yourself for complete tables and comparisons with modern iPhones.

Dare we hope that Android N might fix some of these issues? Obviously this still won’t help most of the existing 1 billion android users…

It should be noted that not only is Apple ahead in hardware; they’ve also been cranking away on their JavaScript engine as well. The latest is an even faster replacement for their LLVM backend with a new B3 backend.

You can read all about it (if you’re into that sort of thing) at Introducing the B3 JIT Compiler.

Even though Apple gets knocked a lot for apparently not focusing on web stuff as much as they could, the combination of their processor designs and their work on JavaScript Core gives them a substantial advantage with JavaScript applications for the foreseeable future. WWDC is next week; should be interesting what other web goodies get revealed.

8 Likes

[quote=“lukescammell, post:236, topic:33889, full:true”]
Dare we hope that Android N might fix some of these issues? Obviously this still won’t help most of the existing 1 billion android users…
[/quote]

Yeah, I was wondering about Android N, too. Who knows what percentage of existing Android phones will even have the chance to run the new OS, given how slack many carriers are with updates, etc.?

1 Like

Meh, doesn’t matter. Brand new OnePlus 3, Snapdragon 820, Chrome 51, Android 6.0.1 and it does 400ms on complex HTML. Disappointing, disappointing.

Phone’s great for £309 and really doesn’t feel that slow. I’m starting to think it’s just Chrome being utter rubbish compared to Safari on mobile…

Android JS perf is somewhat less relevant since we switched to a raw vdom renderer in 1.5 final for the topic page and the header, which provides 6x-8x the performance on slower Android devices over Ember.

The OnePlus 3 is a great device! Still, it will be nice to see if Android can close the considerable JS performance gap in 2017, with whatever new CPUs are coming out, perhaps Snapdragon 830?

4 Likes

Wait, I did not realize the Snapdragon 820 does about 2400 on Geekbench single core, putting it, in theory, within range of an iPhone 6s for single core perf! So yeah, that is hugely disappointing.

The snapdragon 821 appearing in this years Nexus devices will be about 10% faster still, but then the iPhone 7 this year will likely be considerably more than 10% faster than the iPhone 6s as well…

4 Likes

I can tell you that the 820 does NOT feel in any way, shape or form slow in Android 6.0.1 on pretty much any other app I have thrown at it.

A heavy-user iPhone 6s friend was impressed after using it for an hour or two, and commented how he thought Android was supposed to be difficult to use and slow; he said it made the (admittedly year-old) 6s feel sluggish in places.

I’m just reporting what was commented to me 3rd hand as I don’t have any recent iPhone experience and it might just be the superior multitasking handling of Android for a heavy user, who knows. But the Javascript on Android is still a crushing disappointment.

However, given the relatively strong showing of both the Snapdragon 820 and Samsung’s Exynos 8890 in pretty much every other metric, I’m starting to think it’s Google and Chrome holding back Javascript on Android more than the hardware now. Obviously Apple’s hardware is still superior and probably will remain so for the foreseeable future due to complete vertical integration, but from other metrics it shouldn’t be THIS much better.

4 Likes

Some of it was definitely Chrome optimization. Compare December 2015 early test 820 benches…


With the final Samsung s7 release on Chrome 4 months later:


Clearly much faster, and AnandTech said Qualcomm specifically cautioned them about this Chrome optimization issue in Dec 2015.

3 Likes

I’ve ended up reading this thread before while wondering why my phone (Galaxy s4) sometimes takes a while to browse on Discourse. I’ve tried some different browsers and that has produced good results (Puffin specifically being surprisingly fast compared to the default browser I had been using).

I’m upgrading to a new phone soon and was poking through benchmarks to see which ones might perform best with Discourse. It’s not a particularly big deal, but it’s the most important QoL feature for me. I’m not sure if this is the right place to ask (I couldn’t find any more relevant threads), but given the improvements made to Discourse on this front I wanted to ask if anyone had any new data on which phones generally would perform best when browsing Discourse? What phone specifications tend to make browsing a better experience?

3 Likes

There are two things to look at

  1. Geekbench single core perf

  2. General JS perf on the dominant browser on the platform

Unfortunately Android is in the doghouse on both at the moment, still, even in 2016. If you want to have a good cry, compare those numbers for iPhone 7, hell even the year old iPhone 6s, and literally anything in Android world. :sob:

The only real good news is that Discourse 1.5 switched to vdom rendering on core pages, which was a solid 5x perf increase on Android. Ember 2.10 looks 2x faster on Android as well, and we think we will be able to get to that version of Ember in the current 1.7 beta, and thus in the 1.7 release later this year.

11 Likes

In mid-2017, Chrome has finally gotten their act together on Javascript – benchmarks shown below are on a Nexus 6p:

Read the official Google followup as well … all of this went into the Turbofan and Ignition release in Chrome 59.

Combined with the snapdragon 835 we’re finally entering obama-not-bad.gif territory, where a recent Android device (90ms) is at least in the ballpark of an iPhone 6s (60-70ms) when on Chrome latest, currently 59:

15 Likes

Out of interest, which SD 835 device is that from?

My SD 820-based Android 7.1.1 OP3 is only pulling 200/180ms in Chrome 59/Canary 60 on the ember benchmarks and 24.2 on Speedometer, which is even slower than the SD 810-based Nexus 6p.

This isn’t a clean device, but I did clear out all running apps and reboot before running, and got rid of all but 8 tabs in Chrome.

It was a oneplus 5, which I no longer have, I bought it as an off to college gift for my nephew.

1 Like

Impressively, I just tested @zogstrip’s Nexus 6p on emberperf.eviltrout.com, render complex list, 2.11 and I got ~150ms. That’s quite good.

Chrome 55: 212 ms
Chrome 58: 175 ms
Chrome 61: 150 ms

That puts it in iPhone 6 territory, which is about where it should be based on the CPU, and I would rate it as solidly “good”. The Nexus 6p is not exactly a new device… we’ll see where Snapdragon 845 takes us. Current rumors say:

The Snapdragon 845 scores 2600+ in GeekBench 4, for single core results.

That’s about 10% faster than iPhone 6s.

11 Likes

For those who are not familiar with why Apple devices show higher performance, and why Android device manufacturers appear to be playing catch-up, here is a nice video with a little bit of history:


For a point of reference, I ran some of the benchmarks on my OnePlus 2 (ONE A2003), an August 2015 Qualcomm Snapdragon 810 phone - the same processor as the Nexus 6P mentioned above.
This was a new phone when this thread was started back in 2015; it’s now running a custom Lineage OS build (Android 7.1.2). In a month’s time a replacement phone will arrive, and I’ll reset the device to whatever the latest manufacturer OS is and run these numbers again.

| Browser version | “render complex list”, 2.11 | Speedometer |
| --- | --- | --- |
| Chrome 61 (Stable) | 152.27 ms | 27.2 (±2.4) |
| Chrome 62 (Beta) | 179.04 ms | 25.4 (±1.4) |
| Chrome 63 (Canary) | 190.00 ms | 23.7 (±1.7) |

Allowing the phone to “cool” between benchmark runs - these are pretty repeatable numbers.

Hopefully this doesn’t indicate that Chrome is slipping backwards any when Chrome 63 finally reaches the general public.

4 Likes

Canary isn’t a good test candidate. Try beta. Canary is too variable. Also you don’t need both runs, the HTML and regular are pretty much the same these days.

3 Likes

I’ve updated the post above to include the Chrome Beta numbers too.

Well, that is concerning. I am not on Twitter any more, but you might try pinging Benedikt with a link to your post: https://twitter.com/bmeurer

5 Likes

Done:

5 Likes

I wanted to check if this is repeatable across another device…

Trying it on a slightly older (from the factory, OEM updates only) “OnePlus One” (A0001) - Qualcomm Snapdragon 801 - Cyanogen OS 13.1.2 - Android 6.0.1 phone - released April 2014.

| Browser version | “render complex list”, 2.11 | Speedometer |
| --- | --- | --- |
| Chrome 61 (Stable) | 329.27 ms | 15.70 |
| Chrome 62 (Beta) | 435.83 ms | 14.59 |
| Chrome 63 (Canary) | 451.24 ms | 15.10 |

Again allowing it to “cool” between runs.

3 Likes

Just had this reply:

EDIT: Additional update:

EDIT #2: Additional update:

11 Likes

I re-ran all the benchmarks…

OnePlus 2 (ONE A2003):

| Browser version | “render complex list”, 2.11 | Speedometer |
| --- | --- | --- |
| Chrome 61 (Stable) | 147.74 ms | 26.10 (±2.50) |
| Chrome 62 (Beta) | 165.26 ms | 24.80 (±2.00) |
| Chrome 64 (Canary) | 118.85 ms | 27.90 (±0.48) |

OnePlus One (A0001):

| Browser version | “render complex list”, 2.11 | Speedometer |
| --- | --- | --- |
| Chrome 61 (Stable) | 333.12 ms | 15.82 (±0.086) |
| Chrome 62 (Beta) | 421.10 ms | 14.59 (±0.060) |
| Chrome 64 (Canary) | 285.00 ms | 16.00 (±0.084) |

Note the following for the latest Chrome 64 (Canary):

  • “render complex list” times going down - yay! :allthethings:
  • “Speedometer” numbers going up - yay! :allthethings:
11 Likes

Very nice results!! All kudos to Benedikt. Be sure to link him to your results via Twitter; I can’t as I am no longer on Twitter.

4 Likes

UPDATE: Reply from Benedikt

6 Likes

Good news! In the last year things have gotten a lot better!

Snapdragon 835, Android / Chrome circa June 2017

Snapdragon 845, Android / Chrome circa June 2018

Note this is Speedometer 1.0 to keep the comparison apples to apples. Between the respectable hardware bump (finally) and major Chrome/Android JS improvements, we’re looking at 2x improvement. Vastly overdue… but I’ll take it!

This is finally iPhone 6s territory which I’d call certainly fast enough for native Discourse performance.

18 Likes

Using the newer more accurate (but lower) Speedometer 2.0 numbers here:

OnePlus 5 — Snapdragon 835 — 33.1
OnePlus 6 — Snapdragon 845 — 49.4
Xiaomi 9 — Snapdragon 855 — 68.5

These are of course quite far from iOS hardware numbers, kind of vaguely iPhone 7-ish. For comparison this iPad Pro gets 137.5 and the iPhone 11 gets around 150.

9 Likes

For comparison indicating that not all Snapdragon 835’s are the same:

Google Pixel 2 XL — Snapdragon 835 — 24.6

Updated to Android 10, full charge, plugged into mains after clean restart and waiting 5 minutes (so the phone isn’t busy starting).

The phone was originally released 23 months ago (Oct. 2017) and discontinued less than 6 months ago (April 2019).

5 Likes

Mine was tested months ago, closer to the time the hardware was originally made available. Ditto for the other model. The Xiaomi was tested yesterday :wink:

3 Likes

LG ThinQ G7 — Snapdragon 845 — 52.6
Chrome 77.0 Android 9, Speedometer 2

Gonna close this out with a bang

The Snapdragon 865 gets around 80-85 here, compared to…

14 Likes

I’ve seen strong improvements in Edge Canary on desktop, though. An i5-8265U that was limited to 75-85 on stable Chrome v80 now hits 110 (+30%, on v84).

It mainly seems to be doing less work, since Intel Power Gadget doesn’t really show drastically different ‘CPU Util%’ (I’m guessing how many instructions could be retired by the execution units)

Not sure how this translates to ARM. Fingers crossed.

3 Likes

Desktop isn’t really a problem, we have massive amounts of perf. Improvements to Android are enormous though because of the weakness of the Qualcomm SoCs! Are you seeing any canary improvements on Android? :thinking:

Honestly iPhone 7 (855) and iPhone 8 / X performance (865) isn’t too bad on the Android side. It’s certainly “enough” from a Discourse perspective. It won’t blow you away but it’s totally competent.

3 Likes

My Meta PWA is running under 64-bits already :tada:

12 Likes

Note that you’re still looking at slightly-better-than-iPhone-7 perf on any new Android device, courtesy of Qualcomm. That is indeed an adequate level of performance for Discourse …

… but it is also … four years behind.

Also, data confirmed with my Xiaomi Mi 9 (Snapdragon 855) Android device, updated to latest everything:

9 Likes

Retesting this device now on Android 11, latest updates - now 22


And now the latest high-end phone from Google…
Google Pixel 5 - Snapdragon 765G - 25.5

8 Likes

iPhone 12 pro

13 Likes

iPhone 7 plus

5 Likes

Wow. That iPhone 12 Pro outperforms my fairly fancy new desktop with a Core™ i7-10700F CPU @ 2.90GHz and a discrete GPU (don’t know if that matters). That’s crazy.

9 Likes

Scores above 140 don’t really matter too much, if that helps :wink: … there’s a reason the graph “maxes out” at 140.

It’s the scores at ~70 and even lower that need some lovin’. And 70 is adequate for Discourse, for sure.

7 Likes

The iPhone 13 Pro clocks an amazingly high 240, faster than any system I have ever seen.

16 Likes

Retesting the Google Pixel 5 (same device) - 11 months later:

Gone from 25.5 to 31.2.

Latest (public) Android 11 OS / Chrome / software updates applied.

9 Likes

iPad Air is also amazing :heart_eyes:

4 Likes

Google Pixel 5 in 2022

With the latest (public) Android 13 OS update out, I thought I would do this again on the same Google Pixel 5 device.

So this time from 31.2 (Android 11) to 35.3 (Android 13).

JavaScript Runtimes / Engines

There has been a little excitement on the JavaScript engine / runtime front - at least from the “competition is good” point of view.

The experimental JavaScript runtime Bun has shown itself to be a real performance winner on some benchmarks:

This is driving real productivity enhancements in desktop JavaScript development:

So “just maybe” there is a potential for these engine developments / lessons to trickle into Chrome / Android / V8 teams?

“Hopes and dreams”

“Android 42 (Life) OS update - doubles battery life of devices by replacing JavaScript engine.”

5 Likes

Yes, thanks for that update! It’s good to see the software improving, even if the Qualcomm hardware is still quite poor relative to Apple’s hardware. The most recent Apple devices produce around ~300-400 in Speedometer, whereas the most recent Qualcomm hardware is ~100-130. So that’s a near 4x difference, same as it ever was.

On the Chrome side, Sparkplug produced a noticeable bump in JS performance in mid-2021, circa Chrome 91.

At least 50 is decent, and 100 is “speedy enough”, so we are past the threshold of acceptable Discourse performance for Android… most of the time. There is a lot of old Android stuff out there.

7 Likes

Google Pixel 6a

(Android 13)

Unbelievable value. A snappy, well built phone, that is very comfortable in the hand, has great battery life and clearly fast enough for Discourse.

Really puts a big question mark over Apple prices imho!

9 Likes

For reference an Apple Watch scores 20 on Speedometer these days.

Not a ton has changed since 2019, except hopefully more people have 855 (late 2018) or better hardware.

8 Likes

This feels like a big step up:

14 Pro Max

(It’s basically a 13 Pro Max PLUS a Pixel 6a :rofl:)

7 Likes

But my Pixel 6a cost ~$350 :stuck_out_tongue_winking_eye: (Does cost of 13 Pro Max + $350 get you a 14 Pro Max? :wink: )

And you simply don’t need this level of Javascript performance for a Discourse client (nice as it is).

And this is my issue with Apple.

Their “value” SE phone (+$100) isn’t really good value at all. It is way behind on aesthetics, features and screen size. Heck, my little Pixel has no notch, slim bezels, fast charging and a USB-C port! :smoking:

Maybe Apple will get away with this for now, but wallets are tightening, they may have to respond.

I just can’t believe they’ve ‘normalised’ the $1k+ phone! :exploding_head:

5 Likes

Except for the USB-C port, where Lightning is definitely an annoyance – if bang for the buck is the primary criterion, you can buy an iPhone that’s several generations old (even all the way back to the iPhone 11) and it still blows every Android device ever made (!) out of the water in terms of performance. Used iPhones are your best value on a bang-for-the-buck basis… it’s not even close.

Plus, Apple themselves continue to sell the 12 and 13 at lower prices as their “value” offerings, and you don’t give up much other than incremental camera improvements.

It’s just such an awful shame that Android effectively only has Qualcomm for SOCs. I’m very glad Google has finally woken up and is making their own SOCs at last, but … we’ll see. Samsung’s own SOCs have been marginal at best:

8 Likes

If you can find a nice boxed 12 that is indeed a great alternative.

4 Likes

Pretty much, yes. Looking around I can see the 13 Pro Max going for ~$599 new, which leaves me $50 for a snazzy case.

6 Likes

What about MediaTek? They stepped up their game recently.

4 Likes

Hopefully! Anything that brings meaningful competition to the Android SoCs space is quite welcome.

4 Likes

Pixel 6a was on sale for £299 this week (no doubt similar reductions in other territories too?). An absolute STEAL! (and runs Discourse client beautifully). Incredibly fast to charge and excellent battery life.

3 Likes

Nice for you :wink: Here in Finland the 6a costs 399 € and the 5G version is 530 € — except that one is for suckers, because operators here aren’t supporting e-sim or 5G on the 6a (otherwise yes, and widely) :rofl:

Without one political decision I would order one from UK right away, but now… no.

4 Likes

Wow, a Brexit dividend?! :rofl:. We only have one version and it is 5G.

2 Likes

Should we say… just one example of many :wink:

(I really-really-really would like to see Britons move in at least the same direction as the Norwegians did — but now we are off topic BIG time…)

4 Likes

A new version of Speedometer is on the horizon.

2 Likes

Where do you live that iPhones are so incredibly cheap?

1 Like