It might be difficult to find someone prepared to take this on…
…but perhaps someone on the Chrome team would be prepared to be a “Chromium Perf Sheriff” for a specific test for this issue.
Take a look at the Chrome Performance Dashboard: it highlights regressions and improvements, and even identifies the specific (range of) code that caused the improvement or regression.
Higher visibility of the performance issue would be beneficial, as would the automated testing across standardised Google Android devices.
It’s not just about seeing improvements; it’s also about ensuring there are no regressions and things don’t get worse.
Not quite up there with the Apple A9, but a hell of a lot closer than anything we’ve seen so far. Maybe there is hope for Ember/JS frameworks on Android after all.
True, but step 1 is getting the single-core performance up. There’s still a lot of work to do on the software side, but hopefully this starts a trend of Android chip makers keeping single-core performance up and trying to reach parity with Apple’s chips. From there there’s still plenty of work to do in V8, but the hardware needed to be there too.
Half of the puzzle is getting better, at least. Now it’s Google’s turn.
… Is there such a thing as a trustworthy bug / feature implementation bounty programme that could be used to target this V8 / Chrome / Chromium issue?
There would need to be some quite specific requirements.
I would throw £20 (~$30 USD) in personally without even thinking about it…
… the number of man-hours lost reading this and other threads and dealing with user complaints must add up.
It would mean more eyeballs on the problem - I just wouldn’t want the additional attention to be too distracting for any Google developers who might already be working on the issue - although I don’t believe there is any solid commitment of time to it.
This is basically what Angular 2.0 is doing, and to my knowledge it’s more or less what React does. We’ll see if Ember jumps on that particular bandwagon; Fastboot is still very much a work in progress, but who knows what they have planned.
I can’t help but laugh at the post you were replying to, suggesting Apple cheated or bought into ARM. ROFL. Then I saw he got 6 likes… people do believe this shxt. Sigh…
The ARMv8 design started in 2007, picked up speed in late 2009, and was announced to the public in late 2011. Apple shipped their first 64-bit SoC with the iPhone 5S in late 2013.
This doesn’t mean ARMv8 was unknown to the world before then. As you have said, ARM generally consults and works with their partners on many things, if not everything; that is why they have many implementations and reference designs to suit their customers’ needs. And this applies not only to hardware but to software as well, such as compiler writers.
Any other company with an ARM architecture license would likely have been involved at the design stage. And if they had been bothered to actually work on a 64-bit design from the start, there is no reason why they couldn’t have shipped their SoC in 2013 or 2014!
So why was everyone else on the market two years late on ARMv8 64-bit? Well, it takes 1-2 years to produce an implementation of an architecture, and then another 6 months to tape out. Even ARM didn’t have their ARMv8 reference design finished when Apple shipped the A7. It wasn’t just the competitors who were shocked; it was a shock to ARM as well. Because NO ONE, absolutely NO ONE, thought they would need 64-bit - not yet, not then, not on a smartphone.
Qualcomm’s Anand Chandrasekher actually got demoted for saying 64-bit was not needed. He was probably right, but Qualcomm was under pressure from their customers / shareholders to offer a 64-bit chip, so he was not allowed to say that. ARM was under intense pressure as well, to fast-track the whole ARMv8 reference design.
I claim no first-hand knowledge, but I am inclined to agree that ARM were also surprised - if the input from their customers had been that 64-bit was needed, ARM would have moved faster. This illustrates a problem the ARM ecosystem has - the ecosystem is not a substitute for a single vertically integrated company when it comes to understanding the full stack. Perhaps Qualcomm understands phone software well (although I doubt it) - but they certainly didn’t understand the potential of 64-bit, and the other phone chip companies who are ARM licensees have no depth in software.
Another of your points - Apple decided to move early; others could have but didn’t.
Android’s browsers not only suck at JS performance, they suck at CSS animations and have extra oddities to deal with (vh units, for instance, as the top bar collapses). It’s baffling how great Chrome is on desktop and how little they care about it on mobile. Clearly it’s doable on mobile, because iOS does it. That’s why, when you compare the top Android tablet to a 2-3 year old iPad on web browsing, the iPad almost always wins (unless it runs out of memory).
There might be hope for Android’s future. According to the news below (from Ars Technica, which I find well above the average clickbait), Google plans to build its own processors too (and has hired semiconductor engineers from PA Semi). Hopefully these would have better single-core performance than Qualcomm’s octa-core CPUs:
Like a good internet citizen, I pass this link along without having made a real attempt to fully read it or understand it:
B3 generates code that is as good as LLVM on the benchmarks we tried, and makes WebKit faster overall by reducing the amount of time spent compiling. We’re happy to have enabled the new B3 compiler in the FTL JIT. Having our own low-level compiler backend gives us an immediate boost on some benchmarks. We hope that in the future it will allow us to better tune our compiler infrastructure for the web.
B3 is not yet complete. We still need to finish porting B3 to ARM64. B3 passes all tests, but we haven’t finished optimizing performance on ARM. Once all platforms that used the FTL switch to B3, we plan to remove LLVM support from the FTL JIT.
Not very relevant to Android. Always nice to hear about JavaScript getting faster, though.
The biggest short-term improvement to Android speeds is coming from Discourse itself, in the form of vdom:
This is going to be huge, and I really hope the official announcement (@eviltrout will do a proper writeup in a blog post) sparks further debate about the state of JS on Android and what’s being done to improve it on every part of the stack – Discourse, Ember, Chrome and Android.
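For anyone who hasn’t followed the vdom discussion, here’s a rough sketch of the core idea (illustrative only - not Discourse’s or Ember’s actual code): instead of throwing away and rebuilding whole chunks of the page, you build a lightweight virtual tree, diff it against the previous one, and apply only the minimal DOM patches.

```ts
// Minimal virtual-DOM sketch (names and structure are illustrative, not a real library).
type VNode = { tag: string; text?: string; children: VNode[] };

const h = (tag: string, text = "", children: VNode[] = []): VNode =>
  ({ tag, text, children });

// Render a virtual node into a real DOM element.
function render(v: VNode): HTMLElement {
  const el = document.createElement(v.tag);
  if (v.text) el.textContent = v.text;
  v.children.forEach(c => el.appendChild(render(c)));
  return el;
}

// Diff old vs new virtual trees and patch only what changed,
// instead of rebuilding the whole subtree on every render.
// (Sketch only: child removal and keyed reordering are not handled.)
function patch(parent: HTMLElement, el: HTMLElement | null,
               oldV: VNode | null, newV: VNode): void {
  if (!el || !oldV || oldV.tag !== newV.tag) {
    const fresh = render(newV);
    el ? parent.replaceChild(fresh, el) : parent.appendChild(fresh);
    return;
  }
  if (oldV.text !== newV.text && newV.children.length === 0) {
    el.textContent = newV.text ?? "";
  }
  newV.children.forEach((childV, i) =>
    patch(el, (el.children[i] as HTMLElement) || null, oldV.children[i] ?? null, childV));
}
```

The win on slow Android devices is that re-rendering a long topic list touches only the handful of nodes that actually changed, rather than rebuilding the whole list in the DOM.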
Speaking of work on the V8 engine, Addy Osmani posted:
New V8 JavaScript performance improvements
Object.keys() is now up to 2x faster
ES6 rest parameters are up to 8-10x faster
Object.assign() ~ as fast as _.assign()
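For context, these are the kinds of patterns being sped up - a quick sketch in plain JavaScript/TypeScript (the example names are made up; lodash’s _.assign appears only for comparison, as in Addy’s post, and the speed-up figures are his, not mine):

```ts
const user = { id: 1, name: "sam", admin: true };

// Object.keys() - reportedly up to 2x faster now
const fields: string[] = Object.keys(user);

// ES6 rest parameters - reportedly 8-10x faster than before
function sum(...nums: number[]): number {
  return nums.reduce((a, b) => a + b, 0);
}
sum(1, 2, 3);

// Object.assign() - reportedly now about as fast as lodash's _.assign()
const withDefaults = Object.assign({}, { theme: "light" }, user);
```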
In a bizarre turn of events, the Samsung S7 Exynos (non-US) is quite a bit faster here, about 170ms on complex list… but the Samsung S7 Qualcomm Snapdragon 820 (US) scores a mind-bending 750ms? I have to assume it’s something about Chrome / Android that can’t deal with the new CPU?
Scores can be found by following this tweet:
Good Job Android, you caught up to the iPhone 5s! Well, sometimes!