Tested: Why the iPad Pro really isn't as fast a laptop (pcworld.com)
71 points by bhauer on Nov 30, 2015 | 75 comments



I'm not sure this is a great article, but I went down the rabbit hole anyway and eventually landed on some pretty cogent thoughts from Linus Torvalds regarding flaws in the Geekbench benchmarking methodology.

http://www.realworldtech.com/forum/?threadid=136526&curposti...


    you see code with tons of indirect function calls,
    lots of wrapper functions (ie one function massages
    some arguments a bit and then just calls another
    function), and no, it's not inlined because it's
    often all about interfacing between different library
    levels (webkit, various font rendering libraries,
    low-level drivers for "in memory" vs "on screen"
    yadda yadda)
can anyone speak to these criticisms?

i often write code that starts as this:

    function ae(arr) {
      // do first something to arr
      // do second something to arr
      // do third something to arr
      return arr
    }
then refactor it as:

    function first(arr) {
      // do something
      return arr
    }
    function second(arr) {
      // do something
      return arr
    }
    function third(arr) {
      // do something
      return arr
    }

    function ae(arr) {
      arr = first(arr)
      arr = second(arr)
      arr = third(arr)
      return arr
    }
is this what torvalds is referring to?

i thought i had done 'extensive' research on the perf burden of function calls and found it to be negligible, so i decided to refactor for the sake of readability

what are best practices regarding indirect function calls, wrapper functions, and inlining?
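
to make the distinction concrete (this is just my own toy sketch, not anything from the article or from torvalds): a direct call has a statically known target that a compiler or JIT can usually inline, while an indirect call goes through a value (function pointer, vtable slot, callback) and generally can't be:

    // direct call: the callee is known at the call site, so a
    // JIT can typically inline addOne into the loop
    function addOne(x) { return x + 1 }
    function direct(arr) {
      for (var i = 0; i < arr.length; i++) arr[i] = addOne(arr[i])
      return arr
    }

    // indirect call: the callee arrives as a value, so the
    // compiler usually can't inline it and every iteration pays
    // the full call overhead (closer to what torvalds describes:
    // layers calling each other through pointers/interfaces)
    function indirect(arr, op) {
      for (var i = 0; i < arr.length; i++) arr[i] = op(arr[i])
      return arr
    }

my refactor above is still plain direct calls, so as far as i can tell it isn't really the pattern he's criticising.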


It has been known for a while. Even the Geekbench payloads are different for mobile and desktop.

However, the benchmarks are irrelevant. Hardware performance means nothing in the face of application support and workflow integration for users. iOS is still an app-centric interface. The reason the iPad Pro cannot replace my laptop is that my workflow is centered around files.


This article seems really half-hearted in places.

The author couldn't read the iMovie requirements carefully enough to run a movie-encoding test on the iPad?

Also, the author is against using SHA2 for tests (since it uses optimized CPU instructions on iPad) but has no problem with ZIP which may use any number of SSE instructions on Intel?


Even the title. "as fast a laptop"


I had trouble making it through this wall of text that's not getting to the point. Does the article in the end actually explain why the iPad is slower?


The graphs speak for themselves; you don't have to read any of it, as the author mostly just explains the numbers in the graphs, which are simple and readable.


i could be wrong, but i came to the same conclusion in a different way, when i first heard the claim made during the announcement about the power being comparable with laptops.

as soon as he said whatever high percentage of laptops it was, i was thinking:

"clever. people who care enough to watch this are likely to care enough about hardware to have high-powered laptops - as apple fanboys, probably some macbook or other. they probably don't even realise what kind of laptops people buy in large volume and how little horsepower those have, nor what proportion of the wider market macbooks actually make up - iirc around 1 in 20, which is probably enough to exclude most of them from '90% of laptops bought' over any given period, because they are clustered up near the top end of the range."

i guessed the performance might be similar to last year's bottom-end macbook.

there is also the programmer inside of me who is skeptical, from experience working on low-level optimisations for x86/64 and arm (various 5/6/7) code on various platforms...

apple are geniuses at marketing imo. the cult, the fanboys, the elitism, the beauty... all of it culminates in making it difficult for people to think objectively about their products. so i'll not be surprised if the idea of the iPad pro being a laptop killer doesn't die - and maybe even comes to fruition. it doesn't actually have to outperform anything if people believe that it does...


You know what this reminds me of? A handful of years back, various people asserted that developer time was more valuable than computer time, and that the joy of using tools that might be ten or twenty times slower (or even slower than that) but with quick dev times was worth it. Hardware is cheap, good developers are not, do what's fun. There is both truth and non-truth in that. iPads, and especially the Pro, are the hardware-device version of that assertion. Nobody would seriously say they outperform a stout Intel Core chip, but they are amazingly close, and when you look at all the people that were/are still living on XP, I think the benchmarks miss the point. It's not the performance that will make the difference, it's the controlled ecosystem and walled garden of the device that does.

Honestly, it looks like performance is in the realm of becoming a nonissue and I'm stunned at the app selection. The weak thing to me is actual data management, but that's a legacy emotion I have as a data pack rat. Laptop killer for technical pros? Not even close but a laptop killer for the masses? I'd say it's capable and just a matter of market perception.


Laptop killer for the masses? At this rate, future programmers, unless they themselves are the children of devs, won't have general purpose computers to experiment on growing up.


i dunno, my experience with users in the real world is that software is slow - as slow as it always was - and they are as frustrated with it now as ever.

i'm not saying the performance problem won't go away eventually, but we are not there yet.


Linus's comments make a lot of sense but the fact of the matter is this: a number is a number is a number. If I run the same "flawed" benchmark suite on last year's iPhone 5, it will tell me that, yes, its CPU is indeed slower than that of a 6S. The tests might not holistically represent real-world usage but they output just enough data to tell me quantitatively how much better one phone performs over another in a vacuum.

The "real" benchmark; actually running automation on a cross section of phones against a few popular apps, would probably be a more useful test. But Anandtrch and their cohorts don't seem to agree.


i wasn't really referring to linus's comments, although he is pretty much spot on, i hate to say (i have no great love for him XD).

you are right that numbers are numbers, but that says nothing about what they mean or measure. i can write a program that has benchmark in its name and just pulls any old number out, like how many milliseconds since the last hour ticked over, and gives that back as a number. you will get meaningless results.

the argument the author of the geekbench software makes sounds kind of valid, but there is something i learned many years ago about optimisation and performance: measurement trumps everything. the futuremark physics demo, which uses bullet physics complete with its own tight inner loops, is a great example of a measurement showing that this benchmark does not correlate with real-world uses. the large number of other benchmarks that agree are even more data supporting that argument.

that being said, his argument is technically baseless afaik too. no app, not even a simple renderer of a blank screen, will spend all of its time inside a short, tight innermost loop - it will spend its time bouncing between multiple of them, doing a lot of waiting, and having its time stolen at any given point by the OS. i'd be curious to know how he thinks these behaviours will go away in the future. the i-cache setup and L1 cache sizes are not enough for even some quite simple loops, even if they are 8-way associative and bristling with all the latest features - it's often quite trivial to construct pathologically bad cases using common programming practices, e.g. writing your code in objective-c or swift leaves a lot of scope for cache misses and pollution. i can go into excruciating detail about this if you are interested and provide repeatable experiments to back my claims, i've done plenty of work on these things writing and optimising code... but this is long already. :)
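
as a toy example of the kind of pathological-but-ordinary code i mean (javascript here rather than objective-c, and entirely my own sketch): a hot loop whose call site dispatches through many distinct function objects is much harder for a JIT to inline than the equivalent straight-line arithmetic:

    // build 64 distinct function objects; the call site below is
    // indirect and highly polymorphic, so it resists inlining
    function makeOp(n) { return function (x) { return x + n } }
    var ops = []
    for (var i = 0; i < 64; i++) ops.push(makeOp(i))

    function applyOps(arr) {
      for (var j = 0; j < arr.length; j++) {
        for (var k = 0; k < ops.length; k++) {
          arr[j] = ops[k](arr[j])   // indirect call every iteration
        }
      }
      return arr
    }

    // the equivalent direct version: adding 0..63 is just +2016
    function applyInline(arr) {
      for (var j = 0; j < arr.length; j++) arr[j] += 2016
      return arr
    }

i'd expect the inline version to win by a wide margin, but as always: don't take my word for it, measure it on your own machine.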


Sure, measuring one iPhone against the next makes some sense with synthetic browser benchmarks, but I think where the hype breaks down is in using that browser benchmark to say that the iPad Pro can squash any other portable device out there. At the very least these benchmark results indicate that it's (surprise!) more of a mixed bag than has been reported.

Regarding the need for a "real" benchmark, from the article:

"While Geek Bench 3 attempts to create what its makers think is an accurate measure of CPU performance using seconds-long “real world” algorithms, BAPCo’s approach is actually more “real world.” BAPCo’s consortium of mostly hardware makers set out to create workloads across all the different platforms that would simulate what a person does, such as actually editing a photo with HDR, browsing the web, or sending email."

The author goes on to concede that they have custom apps for each platform to accomplish this task, but it seems that the TabletMark developers are aware of the exact issue you raise.


"The benchmark has two performance modules, which give you an idea of how fast the device would be in web browsing and email. The result for iPad Pro is tepid, with performance just beating the Nexus 9 and its Tegra K1. "

OK then... just how 'slow' are these tablets at browsing?

Considering my iPad mini 1 isn't 'slow' at web browsing, does this matter? I've never even noticed the speed being an issue.

This article basically amounts to "those benchmarks are stupid! look at these benchmarks!!"


There are some really odd results in his benchmarks. For example, he has a Dell Venue 11 Pro with a Core-M beating a Surface Pro 4 with a Core-i5 on some video tests. It is unlikely that a Core-i5 loses out to a Core-M for anything.

He has problems with video testing in general. It seems he couldn't figure out how to test 4K video editing on the iPad Pro. For reference, 9to5 Mac got it to work with some bugginess.


It does not surprise me that a Broadwell CoreM can have faster graphics than a Haswell Core i5. There was significantly more die area allotted to graphics in the Broadwell generation. It's also my understanding that significant effort was spent improving performance/watt in low-power scenarios (sorry, don't have a source for this; I heard it at work).

Take a look at the difference in Die area:

Haswell: http://images.anandtech.com/doci/7003/Screen%20Shot%202013-0...

Broadwell CoreM: http://www.extremetech.com/wp-content/uploads/2014/09/intel-...


don't forget the graphics hardware and memory buses. these can have an enormous impact on the performance of anything that gets rendered - even ignoring GPUs, if it is pure CPU rendering.

i'm not saying he hasn't gotten something wrong, just that your reasoning is flawed, based on my experience working with these things. measurement should always be trusted over 3rd-party numbers imo - but only if you make the measurement yourself.


The Core-M has more execution units (24) than the i5 (20), but is clocked lower (950 MHz vs the i5's 1100 MHz). So, depending on how the graphics pipeline is being used, it's quite conceivable that certain tests will be faster on the lower-clocked processor. Especially with how Intel is playing with what sort of graphics unit goes with which chip, counterintuitive results can and will happen.
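
Back of the envelope, taking those numbers at face value and ignoring architectural differences between the two parts:

    Core-M: 24 EUs x  950 MHz = 22,800 EU*MHz
    i5:     20 EUs x 1100 MHz = 22,000 EU*MHz

So raw EU throughput is roughly a wash, and which one wins comes down to the workload.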


    > Executive Editor, PCWorld
Next up, the editor of Oil&Gas Weekly on why Tesla cars get such poor performance.


not even an argument, but had me in stitches. :)

bravo


It really doesn't matter whether your computer is that fast. For heavy (scientific) number crunching you probably want to use a cloud provider or at least a dedicated machine. For pretty much anything else a two year old phone has plenty of power and it's just a matter of whether the proprietary OS that comes with it lets you run the software you want to run.


I disagree. You're painting modern computing as either hard number crunching or the most basic of browsing. Modern phones can barely handle WebGL, are abysmal at multitasking, and definitely aren't able to handle editing source code of the magnitude that I deal with daily at work. Then there's gaming, video editing, large scale image editing (read: Photoshop/Sketch), and many more. There are use cases for the in-between, and I'd argue they're a larger market segment than the two tails combined.


This is absolutely correct. A non-technical friend's circa-2010 15" MBP with 8GB and a hybrid Fusion drive (custom, by me) runs at full memory usage and high CPU utilization, and is quite sluggish from my perspective. The reason? 150+ tabs open in Safari.

I'm not certain said friend is aware that many tabs are open, or even cares, but web browsing is quite CPU- and memory-intensive, and can cripple even fast machines.


browsing the web used to be "viewing HTML documents", now it's more like a virtual machine being used to host applications.


Has {s,}he tried other browsers? I vaguely remember that in Firefox or Chrome, memory for a tab is kept together (tabs don't stomp on each other's pages) so the OS will naturally page it out when you haven't looked at that tab in a while.

This might not be true but I thought I'd make the suggestion.


Firefox remembers your tab icons and names between sessions but won't load the pages unless you interact with that tab. Most folks with 150 tabs open never look at more than a handful of them.

Note that pinned tabs will always be loaded as the assumption is you're using them for chat or something similar which needs to be loaded each time (messenger.com, hangouts.google.com, etc).


Javascript VM performance is what makes a computer seem fast these days. People talk about still being able to use a C2D or even a Pentium 4; they'll go on and on about how an 8-10 year old machine is still usable.

But even a not-quite 8 year old Penryn system with 4GB of RAM and an SSD is going to chug and chug on today's websites. The same site is pretty decent on a recent i3/i5 but it's painful to use those older C2D systems these days.


Usually the time spent in the rest of the browser engine outweighs the time spent in JS computation. Here's a fun experiment: open Firefox and turn off the JIT in about:config. See how much of a difference it makes in your daily browsing.
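
(If you want to try it, the relevant prefs are, if I remember right, something like:

    javascript.options.baselinejit = false
    javascript.options.ion         = false

and flip them back to true when you're done.)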

(NB: I'm not claiming JS performance doesn't matter--merely that it's only one part of the huge landscape of overall browser performance.)


> But even a not-quite 8 year old Penryn system with 4GB of RAM and an SSD is going to chug and chug on today's websites. The same site is pretty decent on a recent i3/i5 but it's painful to use those older C2D systems these days.

A modern Core i3 is approximately twice as fast as Core 2 Duo per clock per thread. You can certainly notice the difference, but there aren't a lot of things you can reasonably do with one but not the other.

The real problem is a lot of today's websites are so miserable they'll even choke the Core i3.


That's more of a fault of webgl than the performance of the hardware.


The point is that there are a lot of people in the general (non-scientific) public for whom performance matters. Whose "fault" it is that performance matters for them is irrelevant.


Let's not blame WebGL for what really is the fault of the entire web stack. Decades of layering got us to this point.


Well yes, but webgl is just another layer. So it is also complicit.


I honestly don't know what you mean by saying mobile phones are abysmal at multitasking (multithreading? multiple processes? drawing multiple UIs?). Mine doesn't seem to have a problem with multithreading or multiple processes, and the only reason I don't use it more is that the UI is horrible and I can't replace it. Also, if your source files are long enough to slow your editor down you should probably start factoring stuff out (long files are hard to read), but I've never heard of or seen that actually happening IRL.


You're probably being downvoted because you're nitpicking language and missing the entire point of what the parent comment is saying, which is that there are more types of compute-intense workloads than raw "number crunching" and these workloads cannot or should not be offloaded to the cloud.


I suppose there's compiling but I've built kernels and other large things on my phone and it wasn't noticeably worse than my laptop.


@swiley so all YOU do is 1) something on your phone, 2) compile something to send to 3) a cloud super computer...

And you completely ignore all the other reasons for power given: Photoshop, Video Editing, Gaming (not Farmville), CAD, etc... all using various shades of gray between "Internet browsing" and "Super Computer Number Crunching"

That willful? ignorance is why you are getting downvoted into oblivion.


The only things you can think of related to software development are compiling small applications and opening single text files in editors? What about an IDE that does partial compilation/indexing every time you change a line? What about searching through large codebases? Etc.


Apple's fortunes started to change when they started building devices centered around the user experience. Focusing on performance is a dead end for hardware suppliers, as there'll always be another company out there who can assemble the same components for a cheaper price.

The iPad Pro's target customer is the creative type, hence its top-notch stylus response. They're outperforming Wacom in many areas, so that alone grants them a customer market.


Really? What about:

1) Games with graphics, 2) code compiling, 3) video rendering, 4) running a lot of things at once.

I could go on and on. There are a TON of computing tasks that go between "need a dedicated, specialized machine" and "your phone can do it"


>It really doesn't matter whether your computer is that fast. For heavy (scientific) number crunching you probably want to use a cloud provider or at least a dedicated machine.

A lot of things we want to do IS "heavy (scientific) number crunching".

Complex 3D Gaming. 4K video editing. DAW and professional audio DSP. Live video filters. OCR. And so on...


I cannot emphasize enough how important it is nowadays for any serious work to have more RAM. On a machine with 128 GiB of memory, you can keep a lot of pre-computed results resident in RAM and substantially speed up your computations.


Careful - "serious work" sounds like "true Scotsman". I think you're right that RAM will limit an iOS device's ability to deal with huge chunks of data, but to borrow a metaphor, that's increasingly a truck's task.

I was going to use the example of large graphics files needing a lot of RAM, but as the design field has gone from print to web, the resolution demands are much less, too. Designers using desktop computers to design print publications are a dying breed.


But graphics aren't the only thing that requires lots of memory. I can crash my 4th-gen iPad just by opening one spreadsheet.


An odd conclusion when it's beating many laptops.

Also, an i3-4005U processor (3M cache, 1.70 GHz) is fairly typical of a new baseline laptop.


A $1k ARM platform beating a $300 x86 platform isn't interesting though. If things are close in price the difference isn't material, but at 3x the cost, comparing performance is almost meaningless, except to say "at least it's better than".


Also, beating at what exactly? If I load a page on the iPad Pro, will I notice it being substantially quicker and more interactive than if I did it on the x86? Synthetic benchmarks claiming this is faster than that are just that: a marketing tool, not anything that bears much relation to reality.


Just for kicks I ran this on a few devices: http://browserbench.org/JetStream/

An iphone 6 gets ~65

iphone 6s gets ~120

ipad pro ~130

new Macbook Pro (safari, haswell i7) ~210

Not scientific, but last year's phone was half as powerful, and in two years the 7s will have another jump in performance. I don't think an Intel system will have anywhere near as large a jump, since we're bumping against the limit of how fast a single core can be.


Interesting article. I am still waiting to get my hands on an iPad Pro to compare it to the Surface Book clipboard. I am primarily interested in comparing the drawing performance but there are lots of ways you can make a more dedicated system perform its intended task better than a more general system. When I bought the Surface book I understood that I was paying a premium for that generality.


I've tried both quite extensively. FWIW, iPad Pro _feels_ faster and more fluid. Still not the right device for me, but I had no complaints wrt performance. I did not buy iPad Pro though: not the right device for what I do. I did buy the Surface Book, but took it back to the store a couple of days later. It was bluescreening a couple of times a day seemingly due to driver issues. Its UI performance was choppy as well, particularly scrolling non-WinRT apps such as e.g. Chrome or Firefox. This was basically the top of the line Core i7 model, so I expect that things are even worse with the lower end models.


Firefox and Chrome are frankly not good at scrolling smoothly in step with fingers. I use FF on my desktop and phone, but run IE on my Surface because of this. Chrome is tolerable (smooth, but the momentum is wrong). Firefox sucks.

Haven't checked in on the Mac versions lately, but I remember the same experience in FF there a couple of years ago. It feels like scrolling was designed entirely around moving in 3-line chunks per click of the mouse wheel, with some smoothing done for longer continuous spins. Then they tried to bolt incrementless trackpad scrolling on top of that and it went really poorly.


My primary machine is a MacBook Pro, and it has no problems scrolling Firefox or Chrome. Neither does my dual boot Win10/Linux workstation downstairs. I think choppy scrolling is mostly a trackpad issue in this case, TBH. The settings aren't quite right. Disabling pinch zoom gesture improves things a bit, but for $2900 I expect that I wouldn't have to resort to hacks to get the core functionality working.


I have a Surface Pro 3 and I confirm what wlesieutre said. Scrolling with the touchpad on the Typecover is completely smooth in Firefox, but for some reason, this isn't smooth when using the touchscreen. I don't know why.


Ah good point, I was referring to the touchscreen more than the touchpad.

Firefox scrolls smoothly on the touchpad, though it definitely has perceptible lag compared to IE.


I hope Firefox does something about the Windows 10 integration :/ I would like to use it in tablet mode, but the keyboard almost never pops up when in a text field, scrolling with the touchscreen is choppy, and it is not really optimized for tablets. But performance is great.


Mine too is the top-of-the-line one. FWIW, the November Win10 update fixed 90% of the early issues. Generally it doesn't do well when it has a 'reboot required' update queued and it's waiting to do it. So far a full reboot, which allows the update to process, brings it back.

I have not had scrolling issues on Firefox or Edge.


I intend to try another SB 6-9 months down the line. I think the hardware shows a lot of promise once they iron out the kinks.


So you have your lies, your damned lies, your statistics and your benchmarks.


The user interaction isn't as fast because it's a touchscreen device mixed with an external keyboard, in a device where the OS is still not optimized to be used as a laptop. The OS is meant to be used as a simplified touch device - for many reasons, but one very obvious one is file management.


The thing is... it actually is killing the PC, no matter what anyone does, it's the king... for the people who check Twitter, "write", and post to Facebook... Disclaimer: I'm being ironic, and I'm actually very happy to see this happening. Why is the mystery: could it be Apple's shining aura fading away? Less money to throw at making it look incredible? Or is 6 or 7 years just how long it takes for someone to post results like this? I tell people the PC is the most generic and most powerful form of computing we have: when in doubt, open VMware, launch Mac OS X, inside Mac OS X run the iPad simulator, inside that run the browser test, and see if you can replicate it on the iPad.


Now... yes, it's flawed, because there is no way to test the whole system in the same way, since every system is different... Or, said another way, all of them are different and serve different purposes... hence no way one is killing the other.


it is not a threat to x86 laptop vendors, which make up the vast majority of the market and are OEM driven.

what people need to realize is that apple is not becoming a CPU vendor threatening intel, but it has reached a point where the next macbook air could use an A9X or AXX, and the macbook pro will not be too far behind.

the interesting part is that these chips might be manufactured by Intel and not Samsung or TSMC.


[deleted]


I don't think this is a fair comparison to make. The same publication also publishes MacWorld. And, well, Apple and numerous followers -have- claimed or hinted at that, using some choice benchmarks. I don't think it should inherently imply bias.


Dude, that's an ad


I upvoted the GP because I think it's important for content publishers to disclose who is advertising on their sites.

I'm glad to know Intel advertises there. It doesn't mean the article is wrong, but it's a data point.

Edit after downvotes: First, I use an adblocker, so I wouldn't have seen the ad. So I treat it like a potential hidden conflict-of-interest.

Second, I think media sites should make as much as possible explicit: who advertises with them, and what influence those companies have on the media site. I'm sure they'll claim zero. Still, I want it in writing.

I don't see any incentive for media sites to do that. But it'd be nice.


Isn't the whole point of advertising to disclose it? If Intel was advertising but you couldn't tell, then Intel is getting ripped off.


I'm afraid of Intel's relationship with XYZ media site affecting the articles in XYZ. I get your point though. Reading myself again, I see that I sound silly.


You have a good point too. I was being mildly facetious. Nothing guarantees you'll see the exact ads that represent a conflict of interest while you read a given article, after all. An explicit disclosure would be nice. Not only to ensure you know the conflict of interest is there, but to show that the publisher understands that it exists too.


I get the sentiment. But for a computer oriented magazine, having advertisers in the computing industry is pretty much par for the course, no?


Absolutely. Unfortunately, as a result I take tech journalism as a whole as seriously as I do auto journalism.


"...disclose who is advertising..."

Because the ads themselves don't? If you don't know who's advertising, then it's not a very effective ad.


If you aren't given a prompt that gets you to act in the way the advertising wants, it's not a very effective ad. Lots of ads are better at doing that if you don't know who wants you to take the action at issue; it's particularly true of political ads, which is why there are attempts to regulate disclosure for those, and attempts to obscure the actual interests represented as much as possible while complying with the regulations.

In any case, publications disclosing advertisers in content is an issue because the advertisement may be remote from the particular content, but the relationship with the advertiser may still shape the content. This remains an issue even where viewing the actual advertisements from the advertiser would reveal the identity of the advertiser.


Well, I learned a new acronym today. I assume L1 I$ means Layer 1 pipeline cache?


I$ = instruction cache



