- For the big cores, the L2$ is a whopping 128 instances of 64KB each (8MB total), of which 6MB is accessible per core/thread.
- The little-core L2$ is 32 instances of 64KB each (2MB total), with 1.5MB accessible per core/thread.
- A12 GPU uses memory compression!
- The A12 big cores have a 2.38 GHz base clock and a 2.5 GHz single-core boost
- The A12 little cores run at 1.538 GHz all-core, 1.562 GHz with 2 or 3 cores active, and 1.587 GHz single-core
- The A11 and A12 have a 7-wide decode (up from 6 on the A10) and 6 integer ALUs (up from 4 on the A10)
- Apple’s microarchitecture seems to far surpass anything else in terms of width, including desktop CPUs
- SPECint/fp numbers show that it's got 2x the speed of any other mobile SoC, and roughly 3x the perf/watt if you normalise for speed and power consumption.
- SPECint/fp numbers also show that the A12 is faster than a Skylake server CPU in core-for-core IPC. Not a perfect comparison, but far better than Geekbench.
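A quick sanity check of the cache and perf/watt arithmetic behind those bullets (my own back-of-the-envelope, not from the article):

```latex
% Big-core L2: 128 slices of 64 KB each
128 \times 64\,\mathrm{KB} = 8\,\mathrm{MB\ physical}, \qquad 6\,\mathrm{MB\ accessible\ per\ core}
% Little-core L2: 32 slices of 64 KB each
32 \times 64\,\mathrm{KB} = 2\,\mathrm{MB\ physical}, \qquad 1.5\,\mathrm{MB\ accessible\ per\ core}
% Perf/W claim: ~2x the speed at roughly 2/3 the power
\frac{\mathrm{perf_{A12}}/\mathrm{perf_{other}}}{\mathrm{power_{A12}}/\mathrm{power_{other}}} \approx \frac{2}{2/3} = 3
```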
The A12 CPU core is nearly, if not exactly, the same as the A11. The only differences are clock speed, cache, and the node improvement. Think of the A12 as an optimisation / "tock" improvement. (AnandTech didn't do a deep dive on the A11 last year due to a shortage of staff.)
So basically Apple's A11 had already reached Skylake-like IPC last year.
Which makes me wonder why Apple is still paying Intel a substantial premium for x86 on the Mac. There is at least a $100 BOM cost saving, which translates to $200 in RSP. In an ideal world, the iPad would have had 8 years of iteration and be eating away at PC market share. It seems the vision of the iPad died along with Steve Jobs, 7 years ago today.
The iPad has so much potential, yet the best effort they gave it was an average stylus and a clumsy keyboard cover. Now that the Surface has not only caught up but developed its own unique forte, the iPad has even less opportunity to innovate and lead.
Apple's CPU architecture excellence is nothing short of amazing. The iPhone XS's A12 processor is 2x or 3x the speed of a Snapdragon 845 (Pixel 3) and has lower power consumption. In these benchmarks it approaches consumer desktop CPU performance, which typically draws 40-50 watts, while the A12 uses only 4 watts. https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-re...
On the one hand I'm excited about Oculus Quest's promise for untethered VR. On the other, it's built on the Snapdragon 835 (last year's flagship to keep costs down) and I can't help but wonder what they could do if they had Apple's hardware instead.
Of course, Apple is only publicly interested in AR via their phone-based ARKit, but media and gaming are huge markets for them. I don't think an Apple VR headset is out of the question, and it would 100% be an untethered device like the Quest is.
I'm not at all excited by Quest for reasons of performance. I just want a high DPI, wide FOV unit for use with my high end PC... so I'm holding off for the foreseeable future.
Crazy how far ahead the iPhone is over every single Android phone. I moved over years ago because of the horrible hardware, lower-quality apps, and lack of support. Seems like this trend will continue.
The processor is one of the only things Apple has a clear lead in over Android flagships. (The others being customer support and longevity.)
Android hardware and software are fully stable now. Android phones have better cameras and match Apple in most other hardware metrics (display, speakers, antennas, build, touch response).
Software-wise, it comes down to preferences. Both have a few things they do well, and others they stumble on. iOS 12 is, from what I have heard, smooth as butter. However, top flagships from Samsung, Google, and OnePlus all run Android without any stutter at a stable 60fps too.
All of my daily apps/use cases are at parity with their iOS counterparts. I don't know if the app ecosystem complaints are valid in 2018. (Apart from Snapchat, because they have some weird hate for Android.)
I can respect your choice of wanting an Apple device, especially if you are invested in their hardware and software ecosystem. But to someone who isn't, the iPhone doesn't offer enough to justify switching from Android.
> Android phones have better cameras and match apple in most other hardware metrics
I am sure there is an Android phone out there which has a better camera, another one that has better speakers, yet another with a better display, and yet another which has a better build than the iPhone. The thing that most people forget is that almost all Android phones have some USPs, and in exchange they compromise a lot on something else. Why do you think the entire Android ecosystem should be treated as a single entity against the iPhone? Show me an Android that does every single thing better than (or on par with) the iPhone.
My experience has been that iPhones are usually not the absolute best at pretty much anything, but they are usually right up there at the top. Maybe it doesn’t have the absolute best camera, but it certainly has one of the best, and so on. And let’s not even compare the longevity of devices.
And before someone talks about prices, Androids that compete in the same arena as the iPhone cost about the same.
Then there is the app ecosystem. Sure, you have all the big names available on Android. But there are a lot of smaller developers with great apps which are missing from Android — simply because of the crappy ecosystem which takes a village to build and support apps for. There are no apps which come close to the likes of OmniFocus, Things, etc. And while Google does come up with cutting-edge tech like AR, a lot of it is not adopted immediately — if ever — by third-party developers, who have to wait for the shiny new version of Android to gain some momentum — something that takes forever in Android land.
So no, it is not just the processor that the iPhone has a clear lead on — it’s the overall experience. I couldn’t care less about the specs as long as the experience is maintained.
Full disclaimer: I am someone who usually carries two phones. This means that given the lack of alternatives, I end up with an Android in addition to my iPhone. And while I like some of the features offered by the individual Android OEMs, I do not consider the ecosystem as a whole to be anywhere near comparable to that of iOS.
> And let’s not even compare the longevity of devices.
Why not? My Android phones last multiple years - I've only switched when moving countries and the mobile bands are incompatible. My Nexus 4, from 2012 but running an up-to-date LineageOS, is my current backup phone and works perfectly. I daresay it works significantly better than my parents' similarly aged iPhone 5 (or 5s?), which also has an up-to-date OS and runs slow as molasses.
But really, the main benefit of Android, for me and most other people I'd guess, is price. The combined purchase price of all 4 of my androids is about equivalent to a single current iPhone flagship (150+300+250+360 Canadian dollars).
I agree with you on general specs - iPhone are pretty much guaranteed to be at (or at least near) the top, for just about every category. Comparing the "average android device" to that will not be favourable. But the android ecosystem does give you a lot more choice - if you stuck to flagship Samsung or equivalent, I think you'd find it to be similarly at-or-near-the-top in most every category. And if, like me, you don't want the best of everything, but rather a decent-at-most-things but at a lower price, Android works better.
Agreed, I personally don't need the newest of the newest in my phone. Even though I used to upgrade every year in the early days of the smartphone, I recently bought a Mi A2 Lite with stock Android.
€200.
Yes, it is not the best, but honestly I don't need a Ferrari to get to work.
I'm rocking the very solid Nexus 6P. I had to swap out the battery, which was a bit fiddly but easy enough and very cheap. I see no need to upgrade in the near-to-medium future.
I have both a Google Pixel 2 and an Apple iPhone X (my personal and my work phone, respectively) and I seriously prefer the Pixel in every regard, with the minor exception of lacking 3D touch support. But then it doesn't really seem like 3D Touch is used for much of anything on the iPhone (and when it is, it's a hidden cheat-code-like thing that is very undiscoverable).
I even like the shape of my Pixel better. The X's notch is a gimmick at best, a hindrance to app presentation at worst.
That is perfectly fine. It always boils down to what you prefer and what fits your workflow. I have no problem with Android in general. It is the grandstanding claims like Androids are better than iPhones at everything that I object to. It is preferable to have more choices than less. I wish the likes of Windows Phone, Ubuntu Touch and BB 10 had survived.
>>> The processor is one of the only things Apple has a clear lead in over Android flagships.
This is the only thing I do not give a shit about. A much slower CPU would be just fine. The true advantage is the UX. Just try to disable automatic grammar correction on both iOS and Android and count the number of steps you have to go through. It is insane how much crap Android has for every single thing. The "designed by engineers, for engineers" mentality is the root cause here. Apple puts the user in charge of how a mobile OS should look and behave, and this is the biggest differentiating factor between iOS and other platforms.
But I can't root my iPhone and use it as a Node.js server whilst simultaneously sniffing 802.11 traffic at DefCon. Therefore it's useless to me, and to everyone else who owns a phone as well.
Totally. This is why the 0.000001% of mobile phone users who want to run node.js whilst simultaneously sniffing 802.11 traffic at DefCon use something else.
> Apple puts the user in charge of how a mobile OS should look and behave, and this is the biggest differentiating factor between iOS and other platforms.
The user in charge? Interesting. I have both an iPhone and an Android handset. It has been this way for years (work is the reason). I believe I will never be able to fully accept the iPhone.
It seems like it wants to make a lot of (often stupid) decisions for me. When I want to connect my phone via cable to transfer some files, it is always a hassle. I can't sideload apps that somehow go against Apple's will. As an example, NewPipe is a blessing on Android if you enjoy YouTube (it's not available via Google Play). The fact that full phone backups are done via a video/audio player (iTunes) makes zero sense.
I feel trapped and irritated when I use the iPhone. When I use the Android phone I feel like the OS trusts me to make my own decisions.
I should perhaps mention that I chose to run Android without a Google account or Gapps. I've gone from Cyanogen via Copperhead to Lineage. Ironically, Android without Google is the best phone experience I've ever had.
Backups do not require iTunes at all. You can back up and restore to iCloud automatically; I have not used iTunes for backups in a long time. Also, if you want to use iTunes, you do not need to have the phone plugged into anything. Having iTunes open and the phone connected to the same network via Wi-Fi is all you need to sync.
> Also if you want to use iTunes you do not need to have the phone plugged into anything.
Titanium Backup on Android allows for pretty much seamless updates across ROMs (if you know what you're doing). Try doing the same on iOS... oh, sorry. There's no choice in OS vendors there.
That is the key difference. What if I don’t want to be the sysadmin for my phone? What if I don’t want to root my phone? What if I prefer things to just work out of box?
It's about choice. 2 billion people with Androids aren't sysadmins. The phones work fine out of the box. The parent likes choice, while some people don't miss it. No need for strawman arguments.
Do they? Is there a one-touch full-device backup solution for Android that doesn't involve rooting the phone or flashing a custom ROM? Is there a way to easily sync my messages to other devices[1]?
The two billion people do not flash custom ROMs or root their devices. They do not even get security patches on time. But they try to use their device just the way one would use an iPhone — with almost zero customisation. So they end up with a phone which doesn't do everything it should straight out of the box and is vulnerable for most of its lifetime, for the sake of choices/customisability they would never use.
If you are talking about the choice of OEMs, I agree. It is not ideal that Apple is the only iOS vendor[2]. But the benefits, IMO, outweigh the problems.
[1] Messages for web is a recent advancement.
[2] While we are on the subject of multiple OEMs, I’d prefer if OEMs followed the Windows model where they can install a few apps and tweak things a bit, but can’t (do not?) make drastic changes to the OS. I’d prefer to live in a world where I don’t have to worry about whether the Android Phone I am looking at is more vulnerable than the others.
The first part of the comment is useful info. The second is pretty useless. There is zero need for this to turn into vim vs. emacs which is what everyone of these threads seems to turn into lately.
> It seems like it wants to make a lot of (often stupid) decisions for me. When I want to connect my phone via cable to transfer some files, it is always a hassle. I can't sideload apps that somehow go against Apple's will. As an example, NewPipe is a blessing on Android if you enjoy YouTube (it's not available via Google Play). The fact that full phone backups are done via a video/audio player (iTunes) makes zero sense.
This is what bothers me too. On an iOS device, you have to do everything in an app, and if the app doesn't support what you want to do, it's just not possible. Want to take an mp3 file from the web and play it in iTunes? Not easy. Got some audiobooks that you want to load in and play in your podcast player? Not easy. Everything has to go through each app, and there's no way to move data from one app to another unless the apps have specifically set up that hand-off.
> top flagships from Samsung, Google, OnePlus all run Android without any stutter at stable 60fps too
This is true. The problem is that FPS doesn't matter as much as touch-to-update latency does. And Apple keeps this latency at about half of what Android can do.
I actually find the touch latency thing to be really misleading. I have one iOS device (an iPad Pro 9.7"). The touch latency is definitely better than on my Pixel 2 XL, but not enough for me to ever care about it in the best case for either device.
The annoyances come when the responsiveness is below the standard for either device (eg, 1/4second to 1/2 second delays). I see some of those on both devices, tbh.
These kinds of speed tests, where they just open apps and "measure" how long they take to open, are extremely unscientific and aren't good indicators of a phone's performance. There are so many variables they never control for, except making sure there are no apps in the multitasking view. They're barely useful for anything other than knowing how many milliseconds you'll save opening an app.
Except opening communication and productivity apps is what people usually do when using their phones, not running CPU benchmarks. That Apple is so far behind and has been for years makes using iOS a laggy experience for anyone used to Android's speed.
Interesting. For reasons entirely separate from speed and benchmarks, I left Android 18 months or so ago and bought an iPhone SE knowing it was "last year's tech". Despite having had mainly expensive, well-specced Android phones, including a flagship or two, I found iOS the most responsive, least laggy experience I've ever encountered on a smartphone.
I was actually quite shocked at just how impressed I was with iOS 10, after Android phones that had cost me twice as much and after many decades of learning that every UI and UX experience will eventually disappoint.
I have the opposite experience, but these are just anecdotes. Every YouTube video comparing non-Samsung Android to iOS seems to agree with my experience.
That's why I said 'everyday tasks'. This wasn't really meant to be scientific or compare heavy workloads, but claiming iPhones have a giant advantage in total is a bit untrue.
Typing this on a Pixel: "fully stable" is a whopper of an exaggeration. Maybe I have a lemon, but I've had camera crashes, GPS issues, numerous freezes in all sorts of Google apps, and more.
Maybe it's a lemon, but I've had almost every member of the Nexus line, starting with the Nexus One, and none have performed flawlessly. Not this badly, but they've never really approached the level of polish that I've seen on the iPhone. This one is worse, to the point that it's making me seriously consider switching to an iPhone. That said, the grass is always greener, and I'm sure I just don't know the flaws on the other side of the fence.
The 5C is now 5 years old. No one should pay anywhere near that much. $750 now buys you an iPhone XR with one of the same cameras that's on the new XS...
Which is what he was comparing. The iPhone XS camera is worse than top of the line Android cameras, and top of the line iPhones have had worse cameras than top of the line Android phones for years.
The only extant Android phone with superior camera hardware is probably the Huawei P20 Pro which has insane low light performance for a smartphone and makes up for middling software with a large bump in image sensor hardware. The Pixel 2's software is good enough to subjectively push it ahead of all iPhones until the recent XS/XR where I think Smart HDR and an increase to an equal sensor size has just about tied with it.
Maybe, but the iPhone camera has never lagged terribly behind the top-of-the-line Androids. At the same time, the so-called "top of the line" Androids had other compromises that the iPhone usually did not.
An iPhone is usually a well-balanced phone. It may not have the best of anything, but it is usually right there at the top. And if you want a great camera so badly that you are willing to compromise on everything else, by all means buy a dedicated digital camera. Similarly, if you want your phone to be so good at playing music that you are fine with the other compromises, you might want to look into a dedicated music player. Smartphones are meant to be general-purpose, jack-of-all-trades sorts of devices, and the iPhone excels at that. If I want a device specialised for a purpose, I'll buy just that — not a smartphone.
Please, define worse. Number of pixels? That doesn't mean better pictures (often worse). Performance? Clearly no; the iOS camera is much faster. ML processing? Clearly iOS is much, much better at preprocessing your photos. On the latest generation they even take several pictures with different focal points and pick the best one.
Sure, their camera is just current-generation without inflated specs, but it works much better.
Picture quality. Performance is exactly the same — both have zero shutter lag. All reviews so far show XS is worse than a top of the line Android phone camera from last year.
Shutter performance is less important than time to start.
All tests show different white balance between the Pixel and the XS. This depends heavily on the screens used on the phones, and photos on the iPhone may simply be adjusted for iPhone screens.
One more plus for Apple: in-person support is fantastic. My battery was going, so I made an appointment at the local store. They noticed my screen had a tiny crack at the lower left corner, so while they did the battery, they replaced the screen.
Cost: $0. And this is for a phone I bought 2+ years ago...
> One more plus for Apple: in-person support is fantastic. My battery was going, so I made an appointment at the local store. They noticed my screen had a tiny crack at the lower left corner, so while they did the battery, they replaced the screen.
I'm guessing that you met a really nice Genius, because I'd imagine that screen cracks come under accidental damage and usually don't get covered under warranty.
I wanted to get my 6S's battery replaced under Apple's 2018 battery replacement program, but it has a cracked screen and Apple won't service the battery unless I replace the screen too.
As it happens, the Genius didn’t spot the tiny crack. As he was typing “no screen damage” in to the system, I pointed it out to him.
He said oh, that might make it difficult, because the SE’s battery is accessed via the screen being suction-pulled off. He went out back to check.
When he came back, he said they’d replace the screen as well. I’d like to think this is one of those textbook “honesty pays” situations, but who knows.
I wouldn't. iOS spies even more by default (there's no way to disable AGPS, vs. opt-in on Android) and doesn't let you make privacy-enabling apps your defaults (system-wide ad/tracker blocker, real Firefox, Signal, local maps, etc.) unless you hack your phone. Worse, it doesn't let you develop for your own phone without rebuilding weekly or paying a yearly fee for the privilege.
I use a system-wide ad/tracker blocker (1Blocker X) and Signal. I have a local maps app (maps.me) that I only use when hiking, so I don't know how good it is for general use.
That leaves:
(1) AGPS (which I'm not informed on, I've never thought of turning it off)
(2) real Firefox (aka alternative rendering engine that you can set as the system default)
(Develop for your own phone without rebuilding or paying isn't a security thing, just a convenience thing)
vs. Google mining your data at every opportunity and Google/carriers dropping support for phones as soon as they can get away with it.
> I use a system-wide ad/tracker blocker (1Blocker X)
It only blocks ads in webviews, unlike ad blockers on Android, which block ads and trackers in native apps as well.
> and Signal.
I specifically said you can't set it as your default, which makes it significantly less convenient and therefore much less useful.
> vs. Google mining your data at every opportunity
This happens when you use Google apps on iOS as well. There is no difference.
> Google/carriers dropping support for phones as soon as they can get away with it.
Buy a phone that gets support. When you buy a car, you don't judge all cars by the worst manufacturers. When you buy an Android phone, doing the same is equally nonsensical.
> I'll take iOS.
Whatever floats your boat. At least now you have enough information to make an informed decision.
That's fine. You don't have to use Google apps on Android. Better, you get to choose which apps to use as default on Android, unlike on iOS as I showed earlier. Whether you use Google apps or not, iOS collects more data from you than Android, as I also showed earlier.
I understand that Safari is the system default, such that if I click on a link in some app that says "open this in a browser" it will always go to Safari, and even if I copy the link and paste it into a preferred browser it is going to use WebKit instead of that browser's real renderer...
What are you talking about with other apps? What would it mean to have Signal the default app? When I want to use Signal I open signal, there's no convenience factor. Signal shows up as an option on the share sheet and I could even delete messages.app from the share sheet if I didn't want to use it.
I guess I can theoretically grasp links that open in maps and wanting to be able to change that default. (though I don't care to, since I don't have to send my data to Google) Are there links somewhere that open in "the default messenger app"?
What are the other default apps that I'm not able to change on iOS?
Also what is this about AGPS data going to Apple? I've just seen an assertion by you, nothing else.
Summary:
1) I'll fully give you the WebKit point; you care about that, I don't, but I agree it's a thing.
2) You care about Maps, I don't, and I'm not sure it is that much of an issue. I interact with maps 90+% of the time after opening the app, so if I want to stop using Apple's Maps it is a minor inconvenience.
3) I don't understand your point on Signal.
4) I think there are other "default apps" you are talking about that I don't know about.
5) What are you talking about with Apple collecting AGPS data?
added:
6) Isn't AOSP not really usable as a phone interface... that is the impression I've gotten. How else am I supposed to not "use Google apps on Android"?
> The default on iOS is to not send any telemetry data to Apple.
This is actually the default on Android. I showed a specific case of data acquisition that iOS does that you can't opt out of. On Android, that data acquisition is opt-in.
I'm not aware of any use of differential privacy on the Android platform.
And who cares about AGPS? Your cellular provider, which is the only one with access to AGPS data, can already track your location based on cell-tower information.
iOS uses positioning based on cell towers and Wi-Fi access points, so if your phone has its location services active, Apple can tell that a device with a specific ID is near a certain location. This ID is rotated periodically and not linked to your device or to you, but you can't turn this off.
AGPS means the phone downloads the GPS almanac, the dynamic dataset you need for using GPS, from the internet as opposed to watching the satellites for about 30 minutes to download it. That would be stupid and would not have any positive impact on your privacy.
> Your cellular provider who is the only one who has access to AGPS data
And Apple on iOS.
> I'm not aware of any use of differential privacy on the Android platform.
Android itself doesn't need it. It collects less data than iOS. If you're talking about apps on Android that use differential privacy, there are plenty, and their implementation of differential privacy is better than Apple's. https://www.macobserver.com/analysis/google-apple-differenti...
We were comparing iOS to Android. What the cell phone companies collect affects both equally. AGPS collection makes iOS more invasive than Android, and you can't even opt out.
Agreed, but I'd also argue that 99% of people don't care about what the processor is. It seems even the basic GUI of Android can't do things that iPhone was doing in 2007. (Live resizing of app windows, realtime view blurring, etc)
Even today, the keyboard on Android forces an entire view refresh, awkwardly jerking the entire view. It's a jarring and unfinished experience, IMO.
I was an Android fanatic - I had the very first device, the HTC G1, and about 10 other devices since then.
Eventually I got the Google Nexus 5, an Android flagship. Its updates ceased about a year after I bought it.
I gave up and got an iPhone 6S. Still getting updates to this day, despite being a 3 year old device. The constant software updates and support that Apple offers is clearly superior to what is provided with Android - even when you buy the 'official' Google phone.
The hardware quality of Android phones is pretty good, and the phones do not suffer as a result of their processor.
If anything, I would say they suffer because of the bloatware that is installed on top of Android.
The big thing for me (though there are others) is emulating old games. I realize this is probably not a super high demand use case, but I love playing NES and SNES games on my phone during... down time.
I was an iPhone user for some time, but got fed up with Apple telling me what software I could and could not run (without a jail break / side load.) I'd go back if they free up the app store market a bit. I don't find some of their restrictions reasonable.
This isn't a solution for most, but if you know your way around Xcode, you can build https://github.com/libretro/RetroArch for any iOS device without the need for a jailbreak. The repo contains working Xcode projects (though it still may take a little tinkering...). I even got it running on my Apple TV!
If you have never had a paid developer account your provisioning profile is only good for a week. This period used to be 30 days but was downgraded to a week sometime in mid-2016, if my recollection is correct.
Apparently if you have had (or currently have) such an account, the period is longer.
Sure, there are a bunch of ways I could have accomplished what I wanted. You can sideload quite a few emulators these days without rooting, but man... just... why do I have to? On Android I download the thing and go. Once Android hardware and software got better I dropped iOS and haven't looked back.
Well not OP, but the reasons I use Android instead of Apple are:
1. Price.
2. Availability.
3. Ability to install apps from APKs and any store, instead of only their own store.
4. Many features, like dual SIM, memory card, OTG, etc., are only on Android.
Agreed, but there are other reasons for installing APKs directly, like not wanting to pay the Play Store fee for publishing test apps, or apps like TubeMate which are not allowed in the store.
You might want to browse https://caniuse.com/ There are so many features of HTML/CSS/JavaScript that Apple has not seen fit to put into Safari (because, of course, doing so could potentially impact their profits from app sales), but those features exist in Chrome and Firefox on Android. Safari on iOS has become the IE6 of the modern web development era and is really holding us back.
Most features you are talking about are not yet at Stage 2. Chrome went ahead and built a lot of these features without full TC39 support. Many service-worker features are still Stage 0/1. When they hit Stage 2, Safari will gain support.
I think the malice attributed to Apple is somewhat earned due to momentum in the PWA space, but is mostly just due to obeying the TC39 stages.
Who cares if Firefox or Chrome on iOS support some new feature if everybody is using Safari since it's the default. You can't ignore half of the mobile market when rolling out your website.
>horrible hardware, lower-quality apps, and lack of support
The latest Pixel (if you want to use that as a comparison) has better or equivalent specs in almost everything other than CPU power. Lack of support? Well, look at how the Nexus 5 still has the latest Android version available.
And for me the clincher. Can I install an alternative free OS (like you can install Lineage in the pixel)? If not, then into the trash it goes.
Isn't this just an iteration on last year’s iPhone X?
I always thought Android was further ahead in a lot of departments (especially hardware), mainly because there is a new Android phone almost weekly, which sometimes uses new technology that the iPhone did not have available to it at the time.
>Isn't this just an iteration on last year’s iPhone X?
In the way that every iPhone has been an iteration on previous models, but here we are. The main feature this year is photographic quality in difficult lighting conditions, particularly low light and harsh backlighting. Some of the side-by-sides with pictures taken on last year's model are pretty impressive.
In terms of hardware, it used to be that Android manufacturers could get a new component into a phone maybe 6 months before Apple, but nowadays Apple is developing custom hardware features Android manufacturers have no way to replicate: things like in-house developed health sensors, the neural-engine co-processor behind the image enhancements I mentioned above, and the 3D face-scanning tech in Face ID.
These aren't things that can be knocked together in 6 months; it takes years. So we've gone from Apple occasionally missing a feature for 6 months, and sometimes stealing a 6-month march, to its competitors now being several years behind, if they even have a path to effectively competing at all.
But as ever, it just depends what features matter to you and at what price point.
I don't understand why you felt you had to put down Android in this thread with a statement that is unsubstantiated and probably years out of date, at the least.
If you're happy with your current iPhone (yay!)... why the need for this inflammatory rhetoric? Or is it marketing?
Unless you have very specific apps you use (most big ones are the same imo) I'd recommend getting a low end Android phone over a 1400 dollar phone any friggin day.
My $279 phone has taken more beating than all my iProducts combined and still works great, and whenever it breaks I can fix most of it myself (not having to deal with an apple store is always a big plus).
This is like trying to argue that your Toyota Corolla is a superior car when compared to a Mercedes. It's a very narrow utilitarian way of thinking that only appeals to ascetics.
A $250 phone is a high-end phone from 2 years ago, and for most people that is more than enough. There are no low-end iPhones, and Apple is pretty good at making people think they need a Ferrari when all they want is to go from point A to point B in a decent way.
I don’t think you can use that argument yet. I’m not sure we’ve quite reached the tipping point where all phones are so fast, that you need some special use case to notice performance differences.
My iPhone SE is only 2 years old and there are websites that give it trouble still.
As opposed to my 6 year old Air where I can’t tell the difference in performance with my work laptop unless I’m running multiple docker instances or transcoding some video.
Maybe the A12 is the tipping point. 7 years from now, all phones will be at least this fast.
I have a Motorola G6 Plus which, together with uBlock Origin, browses the web just fine, and I only block the worst offenders.
The only thing I notice is that my tuner (Tunable) starts at about the same speed as it does on my friend's old iPhone 5. There is some touch-input latency, but I'm not so easily bothered.
When it comes to consumer electronics I have come to the conclusion it makes more sense to pay more for the best available now (an iPhone) and holding for longer, rather than try to pinch pennies by buying yesteryear's technology. It's the only way to get acceptable usable life out of them.
Apple has been making hardware with sub-par durability for quite some time. The only things they made more durable have all been at the expense of repairability. The whole bendgate thing? Yup, Apple actually went against best practice and stopped using proper underfill. The result? The touch IC chip came off. Who would have guessed that a bendy phone without underfill would suffer flexion damage? Everybody. Even Apple themselves, in official, leaked documents.
Apple has been extremely successful at denying problems up until there is a class action lawsuit. Then they silently release a pretty hostile repair program with the words "a small percentage of iDevices ...".
I used to be the biggest Apple fanboy, but having not one but two MacBooks fail on me was more than I was willing to stand. The last time, I had to choose: leave the device with them under an official repair programme and have it wiped, OR save the data on it and void any Apple repair programme (they later changed this policy though). Why? Because of some sort of catch-22 where they would not do data recovery on a device with a faulty logic board (onto which they had soldered the SSD!), and the fix was a new logic board with a new SSD, with the old logic board and SSD sent away for refurbishment. Think different!
This seems like a super delusional/ill informed comment. The iPhone excels in various benchmarks that don’t matter for the typical user. Otherwise android hardware and software is extremely competitive, often better, at much lower prices. Especially the mid range phones coming out of China. Take a look at the Oneplus 6 and tell me the iPhone XS is 2x better than that phone. Because the price is 2x. I’m a long time Apple user looking to switch soon.
iPhones are worth 2x as much because they last 2x as long.
In iOS 12, the performance of the iPhone 6s, a phone released over 3 years ago, was dramatically improved. And Apple will continue to release software updates for probably another year or two.
How many Android devices are receiving updates 4-5 years after release ?
"What is quite astonishing, is just how close Apple’s A11 and A12 are to current desktop CPUs. I haven’t had the opportunity to run things in a more comparable manner, but taking our server editor, Johan De Gelas’ recent figures from earlier this summer, we see that the A12 outperforms a Skylake CPU."
Apple gets a lot of flak for the high margins on their products, but those high margins allow them to design and create high-cost SoCs. They also essentially have a business guarantee that all of the chips they fabricate will get purchased, at a known price and ROI.
So, on top of the vertical integration of software/hardware advantage, they have diminished risk.
After looking at this closer, what they are comparing is single threaded performance of the Skylake Xeon 8176. At 28 cores and 56 threads, that watts per thread or core value starts to look somewhat comparable, given Apple's low core count (compared to that desktop CPU) and that they might not run the Vortex (2x) and Tempest (4x) cores at the same time(?).
So, I think it's both a matter of comparing Apples to Oranges (ugh, I would use a less confusing analogy if one came to mind) in that the types and usage and number of cores are different, and also just fairly impressive that they can get that much single threaded performance out of the chip.
The craziest thing is how close they get at 3W. That's the part that's staggering. What could Apple do with, say, laptop thermals? Desktop thermals? The mind creeps closer to boggling.
Well, except that hyperthreads are not the same as cores, so it's not quite equivalent. My understanding is that the two hyperthreads for a core share some execution resources, but are able to run in parallel for other resources. That means some workloads won't benefit much at all, others will benefit greatly, and the usual case is somewhere in-between.
Also, I think I've heard people here noting how hyperthreading causes some workloads to go slower (possibly from CPU heat throttling? Dunno) so some people disable it in the BIOS first thing.
So the truth is likely somewhere between 2.95W/thread and 5.90W/core (assuming 165W is even correct; there are comments here noting that while AMD and ARM quote the max, Intel quotes an operating average...).
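Spelling that division out (my arithmetic, using the quoted 165 W TDP):

```latex
\frac{165\,\mathrm{W}}{56\ \mathrm{threads}} \approx 2.95\,\mathrm{W/thread},
\qquad
\frac{165\,\mathrm{W}}{28\ \mathrm{cores}} \approx 5.89\,\mathrm{W/core}
```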
I'm pretty skeptical of this. 3 W mobile CPU outperforms 165 W desktop CPU? If it was the case, they would have those things in macbooks already, and would boast tens of hours from one charge... If you go to the article that is mentioned in "recent figures", they tested single-core performance. So it was single core of a 28-core CPU (with boost though). It's still impressive, but not unbelievable.
It's a 165 watt desktop CPU running a single-threaded integer benchmark that doesn't use the other 27 cores, doesn't fully utilize the 6 memory channels or 40 PCIe lanes, etc. Also, you're comparing TDP to average power. Skylake won't come close to the TDP running an integer benchmark (it's the AVX units that really suck power).
Amber Lake Y (a 2-core part with 4.2 GHz turbo) should achieve similar performance on a single-threaded integer benchmark like that, and it's a 5W part. Still, that's incredibly impressive on Apple's part.
I'd consider it feasible-- in my testing, my iPhone XS outperforms my mid-2017 MacBook Pro (with i7-7820HQ [1]) in single-core benchmarking with Geekbench 4. Not by much, but it does manage a slight lead: https://browser.geekbench.com/v4/cpu/compare/10006209?baseli...
Granted, this probably wouldn't hold up for a more sustained workload since the fanless iPhone's going to hit its thermal limits much more quickly than the full-size laptop, but it's pretty impressive that they're edging this close to Intel anyways (in a much smaller, much lower-power package). It'll be really interesting to see what happens with the more unconstrained A12X that'll likely go into the iPad Pro this year.
Did you run it on MacBook when it was on battery? Might not get to the max TDP in that case. So it would be like 30W vs 4W, but it's also 14nm vs 7nm.
Anyway though, it's still _very_ impressive from Apple, but much more believable. Looks like they've got very good engineers working on it. I hope they can scale it to the laptop requirements.
I think that one was run on AC, but it was a while back so I can't be 100% sure. I keep meaning to re-run it to make sure, but haven't gotten to it yet.
I also have benchmarks of Haswell-era hardware which the XS is also very competitive against-- including my personal desktop, which the XS beats in single-core and matches for multi-core. But that one's probably not an entirely fair comparison, given that it's older hardware and the machine wasn't intended to be particularly high-end when I built it.
I'd like to see some more comprehensive benchmarking against Intel... I'd be inclined to do some myself, but I don't have a good benchmarking suite that isn't Geekbench. If only the SPEC benchmarks weren't so stinking expensive...
I'm also somewhat skeptical, but it's worth noting that "Skylake" covers a lot of ground[1], and the mobile Skylake processors use 15W (9.5W when in low power processing mode)[2]. Still a matter of multiples, but 5x instead of 55x.
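Making the multiples explicit (my arithmetic, taking the ~3 W per big core figure used elsewhere in this thread):

```latex
\frac{165\,\mathrm{W}}{3\,\mathrm{W}} = 55\times,
\qquad
\frac{15\,\mathrm{W}}{3\,\mathrm{W}} = 5\times
```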
They talk about a specific CPU there, the Xeon 8176. That's why the phrase from the article, "we see that the A12 outperforms a Skylake CPU", is very sensationalist.
Ah, I see what's going on. The Skylake numbers are for a single threaded test. Assuming the A12 ones are as well, what's being said (poorly) is that the single threaded performance of Apple's A12 is better than the single threaded performance of Intel's Skylake Xeon 8176.
Of course there's a different number of cores in each, and the performance of multi-core code will likely differ based on technologies (memory architecture, hyperthreading vs real cores, etc).
If it's accurate, it is still fairly impressive (just not as crazy as it first sounds).
AnandTech is using SPECint/fp numbers to compare single-core IPC between one A12 big core and one Skylake server CPU core.
Per-core power consumption on that Intel chip is 165/28 = 5.89W.
Apple is pushing similar IPC at 3W instead of 5.89W. In that context it's not so unbelievable, but it's still really fucking impressive.
The reason they don't have them in laptops is not because of performance, it's because they don't have an OS. A laptop that only runs iOS is called an iPad and it doesn't have the software I want to run.
I am quite sure that macOS runs just fine on an A12. Like back in the PowerPC days, when Apple always built OS X on x86 chips too, they certainly keep testing macOS on ARM hardware. Even if they don't have any imminent plans to release an ARM-based Mac, it is a great way of testing your software for portability.
Of course, but without any software. Microsoft has done some interesting stuff in that regard so maybe there is an easier transition than getting all third party developers on board, but it's a tough nut to crack.
Apple have experience of that, and I mean a lot. The first C compiler I used was the cheap student version of Metrowerks that only compiled to 68K, I was on a PowerPC machine at the time. Mac OS X had a Mac OS classic compatibility mode until around the time the Intel processors replaced PowerPC ones, and early Intel machines could run apps compiled for PPC.
Apple is going to have a good ecosystem of ARM software very soon. The whole reason they pushed their Marzipan initiative with the ability to seamlessly port iOS (ARM) apps to macOS (x86) is so that they can also do (iOS >>> macOS) on ARM.
By asking developers to port from iOS apps, they have the side effect of getting developers to implement "fully featured" apps, just with a mobile UI. Adobe has already promised they'll be doing this with photoshop and Illustrator, releasing in 2019.
Others will follow suit. Not only will this strengthen the feature set of iOS apps, but it will strengthen the iPad Pro app ecosystem, and apple will simply link the iOS and Mac App Stores, allowing for "universal" apps.
Marzipan will translate them to x86/64 and later when Apple switches to ARM, Marzipan will simply keep these apps on ARM, and change their UI to match.
If you've been watching Apple closely over the last few years, you'll notice that they're consolidating their APIs across macOS and iOS. Things like Metal, AppKit, UIKit and so on are beginning to converge. This is all setting the stage for Marzipan.
They announced the "first half" of what they intend during the WWDC 2018 keynote. But the second half will be announced only after they reveal ARM Macs, and then "surprise - all the iOS apps you see now are available here, too. They automagically get UI adaptations for mac!"
Yes, the challenge is the software. Not so much the native Mac software created with Xcode; that would probably rebuild at the press of a button. The elephant in the room is VMs. Many Mac users (for example, me) run x86-based operating systems inside VMs on their Macs. This wouldn't work without dynamic translation like Rosetta. But as Apple is making their own processors, I would consider it a possibility that they add some hardware accelerators for Rosetta-style software to their chips.
While I understand that running VMs is important for you and many developers, people like us who want to do this have to be a minuscule share of Apple's market. I wouldn't bet on them spending much time on it.
You are right that developers are only a small part by volume. However, they are the most important users in an environment, as they are the ones creating new applications. Same with other power users who require a VM. Driving them away from the platform could have a much larger net effect. That Macs got much more popular after the switch to x86 hints that compatibility isn't unimportant.
So I do think that Apple is probably going to ARM, but they need to be very careful in the steps they are taking and they need to have a really compelling offer so that users are willing to go through the transition.
My point was that if they had this massive an advantage now, they would have seen it coming several years prior and would have been working on the OS and anything else needed.
But that's not the point. The main point is that I think the claim that the current A12 is close (to within any reasonable percentage) to top desktop chips is simply not true. I haven't seen any concrete evidence of it; everyone just seems to like the idea, which is why it has been talked about so much lately. I'm all for great advances and all that, but I still have to use common sense: 3-4W vs 150+W is too big a gap.
People aren't comparing 3W mobile chips to 150W desktop CPUs. They're comparing like for like in terms of perf/watt. Skylake isn't a particular SKU; it's a microarchitecture generation, from which Intel can scale the design from low-power 10W parts to power-hungry server chips.
Apple has successfully changed architectures before, and has been developing an ARM version of Mac OS for a while. I am sure it doesn't have the same level of refinement, but it might not be that far off.
Legacy isn’t a good explanation because the instruction decoder that turns x86 instructions into microcode is basically a fixed cost and makes up a tiny portion of the silicon. Clearly there is room for innovation in the x86 sphere as well because the AMD Zen architecture was created brand new from scratch and has become very competitive.
The more likely explanation is that there is an exponential difficulty curve in CPU optimization and performance that Apple and their admittedly talented engineers are climbing very quickly.
When an Ax chip hits true desktop performance sooner than later, the limit of physics is going to slow down Apple just like it has Intel and AMD. There isn’t any magic they are going to be able to do that suddenly gives them a 2x IPC gain or huge frequency advantage.
Well my Intel chip needs a fan and my iPhone doesn’t. So it’s not an apples-to-apples comparison (no pun intended, please nobody reply with “I see what you did there”).
The fact that someone can come up with a chip of similar performance to Intel’s isn’t that amazing. The fact that they can do so without needing a fan is.
I agree— though it’s perhaps worth mentioning that a fan on the phones was used for the SPEC2006 benchmarks:
> The performance measurement was run in a synthetic environment (read: bench fan cooling the phones) where we assured thermals wouldn’t be an issue for the 1-2 hours it takes to complete a full suite run.
Still darn impressive, as in the real world you won’t be doing anything with your phone that maxes out the SoC for 2 hours straight.
The performance comparisons are not remotely apples to apples to begin with. All that's been established is that an Ax chip is entering the realm of desktop performance. Under all the workloads you'd benchmark a regular desktop chip on (FPU, AVX, etc etc) the Ax chips are still going to be far behind at the TDP and frequency they run at.
If you could pump up the TDP and frequency of the Ax chips to actual desktop class and throw a heatsink on so you don't get thermally throttled, I am sure Apple's chips would be competitive.
It's an inevitability that we'll see a desktop-class Ax powered Macbook at some point soon-ish. When that happens we can start making legitimate comparisons.
Due to all that legacy the instruction decoder has a lot less opportunity for optimizations.
There may be a physical limit to the size of transistors but the limit of computing performance is not ‘what Intel is doing’. There’s more to performance than instructions per second or clock frequencies. In fact these don’t lead to the same performance at all as ARM is RISC and x64 is CISC.
Nobody mentioned RISC vs. CISC until your comment. Why is everyone only looking at perf/watt, or at the SoC beating the single-threaded performance of the Skylake uarch? A RISC-based processor with that much cache always has an advantage thanks to its register-based load/store architecture (especially in the A12's case, with its huge L$), versus a CISC design spending more transistors and going out to DRAM to execute instructions, which drives up TDP requirements, plus different multi-instruction parallelism capabilities. And AArch64 and x86_64 code are entirely different.
Add the GPU, DirectX, and encode/decode functionality (I know this is not the subject) and the ARM processors simply cannot keep up. The author did also mention the limitations of the comparison; as noted above, the Skylake Xeon is capable of running hex-channel ECC memory with huge capacity, and that alone will outpace everything on ARM, so it's not a fair comparison at all. Geekbench is an OS-level optimised bench. There's simply no valid comparison point.
As I'd say to the author: do a Cinema 4D render bench like Cinebench and then we can talk. He should at least have mentioned the uarch part. Also, on a side note, Apple underselling its chips? That's a first, since Apple always over-promises. Their latest meltdown on the MacBooks, with low-quality/under-specced VRMs throttling the Coffee Lake machines, is a joke.
That x86 support isn't free. Intel's SIMD implementations and new instructions only ever seem to target the high end; I don't think you could use AVX-512 at low power. They also guarantee various things about memory ordering that ARM doesn't, and of course the page tables, 16-bit boot modes, and everything else are super complicated.
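To make the memory-ordering point concrete, here's a minimal sketch of my own (not from the thread): the same acquire/release C++ maps to plain loads and stores under x86's strong ordering, but needs load-acquire/store-release instructions (LDAR/STLR) or barriers under ARM's weaker model, which is one of the guarantees x86 silicon has to pay for.

```cpp
#include <atomic>
#include <thread>
#include <cassert>

std::atomic<int>  data{0};
std::atomic<bool> ready{false};

void producer() {
    data.store(42, std::memory_order_relaxed);
    ready.store(true, std::memory_order_release);   // x86: plain store; ARMv8: STLR
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {} // x86: plain load; ARMv8: LDAR
    assert(data.load(std::memory_order_relaxed) == 42); // guaranteed by acquire/release pairing
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```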
I would expect there is some legacy aspect to it, although Intel building the Itanium (no legacy restrictions) just made a box that used more watts per instruction than their 32 bit line and wasn't that much better off. That said, things like the 'half carry' flag, segment pointers, and a large power hungry FPU don't help the Intel story.
I expect the primary difference one will find is that Intel has more "large" cores in their chips (full fledged cores) and the comparable instructions have some "gotchas" (like the flag mention) which make them harder to retire early in the pipeline. Toshiba made a funny comment at one of the Microprocessor forums that for their microprocessors, if you only compare the instructions that both they and Intel implement, the Toshiba processor is better in every way.
What is clearly true, though, is that compatibility aside, ARM chips are becoming serious contenders in the desktop/laptop space sooner than I expected them to.
But that isn't really the question rather than the answer -- in other profiles the chip could have significantly better heat dissipation solutions and would have dramatically more headroom.
That's not really the question, because turbo boost exists on servers and workstations too. Intel chips will clock to 4.5GHz for a short while before throttling down to their base frequency once the thermal ceiling is hit.
Huuuge caches, and optimisations for using them as efficiently as possible: probably async prefetch, possibly a semblance of maskable cache, very likely reordering around cache-missing ops.
Here, the prehistoric nature of x86 actually drags it down because of poor predictability
x86-64 is decently predictable, probably the same as armv8. i386 wouldn't be because it has to spill to memory so much you can barely manage to rename those registers.
> Overall the new A12 Vortex cores and the architectural improvements on the SoC’s memory subsystem give Apple’s new piece of silicon a much higher performance advantage than Apple’s marketing materials promote. The contrast to the best Android SoCs have to offer is extremely stark – both in terms of performance as well as in power efficiency. Apple’s SoCs have better energy efficiency than all recent Android SoCs while having a nearly 2x performance advantage. I wouldn’t be surprised that if we were to normalise for energy used, Apple would have a 3x performance efficiency lead.
Wow indeed. I'm impressed that Apple has managed to create and maintain such an insane lead in ARM performance for such a long period of time.
Does anyone know of more technical reasons for Apple's ARM processors outperforming everyone else's by such a large margin, and for such a long period of time? Seems like there's some fundamental difference in what Apple is doing, and I'd love to read more about it.
One factor called out in the article is the number of instructions that can be carried out simultaneously during a single clock cycle.
>Monsoon (A11) and Vortex (A12) are extremely wide machines – with 6 integer execution pipelines among which two are complex units, two load units and store units, two branch ports, and three FP/vector pipelines this gives an estimated 13 execution ports, far wider than Arm’s upcoming Cortex A76 and also wider than Samsung’s M3. In fact, assuming we're not looking at an atypical shared port situation, Apple’s microarchitecture seems to far surpass anything else in terms of width, including desktop CPUs.
Apple first moved to wide CPU designs with the Cyclone CPU core found in the A7 SOC first used in the iPhone 5s.
>With Cyclone Apple is in a completely different league. As far as I can tell, peak issue width of Cyclone is 6 instructions. That’s at least 2x the width of Swift and Krait, and at best more than 3x the width depending on instruction mix. Limitations on co-issuing FP and integer math have also been lifted as you can run up to four integer adds and two FP adds in parallel. You can also perform up to two loads or stores per clock.
That's part of it, but doesn't fully answer the question. Decoding and issuing 6 instructions per cycle ordinarily is extremely costly in terms of power. And it's usually very hard to keep those execution units busy--it's hard to find six independent instructions to issue every clock cycle. How Apple built a 6-wide CPU within that power envelope, and optimized the compiler to actually use that IPC is the really interesting question.
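As a toy illustration of the "six independent instructions" problem (my own sketch, not from the article): a single dependency chain leaves a wide core mostly idle, while splitting the work into independent chains gives the scheduler something to issue in parallel.

```cpp
#include <cstddef>

// One long dependency chain: each add needs the previous result, so a 6-wide
// core can't extract much ILP from this loop.
double sum_serial(const double* a, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Four independent accumulators: four chains the out-of-order core can run
// side by side, keeping more execution ports busy.
double sum_wide(const double* a, std::size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; ++i) s0 += a[i];   // tail
    return (s0 + s1) + (s2 + s3);
}
```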
Lower maximum clock speeds mean you have more FO4s to play with, and that potentially makes the fan-out issues in wide designs a bit more manageable. Decode I expect to be pretty easy; as long as your branch predictor is on target, the power cost just grows linearly with decode width when all your instructions start at nice 32-bit boundaries.
Mostly I'm curious about how complete the bypass network is on their functional units and if execution is clustered like the POWER8. The width doubling in the A series does remind me of the POWER 7 to 8 transition.
Renaming is also apparently a major constraint on design width in many cases but I'm not so familiar with that.
What does "FO4" stand for here? Googling it yields "Fallout 4", which definitely isn't right, and I'm not sure what other keywords to tack on to get the right result.
Sorry, that's "fan-out of 4". Traditionally you look at circuit timings in terms of how long it takes one transistor to switch 4 other transistors of equal size. Wire capacitance is a lot more important these days, so it's not necessarily the best metric anymore, but it's still used. The fewer FO4s of delay in your longest pipeline stage, the faster you can clock the chip, though there's also a non-linear dependence on voltage. Because of that non-linearity I'd still expect a lower-clocked chip to have more complex pipeline stages. And you can only increase your speed by slimming down stages so much: the overhead of latches and accounting for clock jitter generally adds about 4 FO4s beyond the useful logic you accomplish in a pipeline stage.
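For a rough sense of the arithmetic, here's a back-of-the-envelope sketch in C; the per-FO4 delay and the per-stage logic budgets are illustrative assumptions, not figures for any real process:

    /* Back-of-the-envelope: clock period ~= (logic FO4 + latch/jitter FO4) * FO4 delay.
     * With ~10 ps per FO4 (assumed), a 20-FO4 logic budget plus ~4 FO4 of overhead
     * tops out around 4.2 GHz, while a fatter 30-FO4 stage lands near 2.9 GHz. */
    #include <stdio.h>

    int main(void) {
        const double fo4_delay_ps = 10.0;                 /* assumed delay of one FO4 */
        const double overhead_fo4 = 4.0;                  /* latch + clock-jitter overhead */
        const double logic_fo4[] = { 16.0, 20.0, 24.0, 30.0 };

        for (int i = 0; i < 4; i++) {
            double period_ps = (logic_fo4[i] + overhead_fo4) * fo4_delay_ps;
            double freq_ghz  = 1000.0 / period_ps;        /* 1000 ps per ns */
            printf("%2.0f FO4 of logic per stage -> ~%.2f GHz\n", logic_fo4[i], freq_ghz);
        }
        return 0;
    }

The takeaway is just that a design targeting ~2.5 GHz can afford noticeably more logic (and fan-out) per stage than one chasing 4-5 GHz.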
Excellent explanation! Quick follow-up question: Why FO4 specifically, and not some other number/metric? Was (is?) that a particularly common structure in CPUs?
One respect in which I can imagine the x86 ISA being a real problem is in decode bandwidth. To issue 6 x86 instructions per cycle, either the front end needs to decode 6 per cycle, or it needs to cache decoded instructions. And x86 can’t be decoded in parallel without massive complexity because the instructions are variable length, and even determining the length requires mostly decoding the instruction.
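To make that serialization concrete, here's a toy sketch in C (purely schematic; the length function is a made-up stand-in, not a real x86 length decoder):

    /* Toy model of the front-end problem: with fixed 4-byte instructions,
     * decoder i can grab its word from the fetch block independently; with
     * variable lengths, instruction i's start offset depends on the lengths
     * of all earlier instructions, so finding it is inherently serial unless
     * you add a lot of predecode hardware. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Fixed-width (AArch64-style): each slot is at a known offset. */
    static uint32_t fetch_fixed(const uint8_t *block, size_t i) {
        return (uint32_t)block[4 * i]
             | (uint32_t)block[4 * i + 1] << 8
             | (uint32_t)block[4 * i + 2] << 16
             | (uint32_t)block[4 * i + 3] << 24;
    }

    /* Made-up "length decoder" for the sketch: low opcode bits pick 1..4 bytes. */
    static size_t toy_length(const uint8_t *insn) {
        return 1 + (insn[0] & 0x3);
    }

    /* Variable-width (x86-style): offsets form a serial dependency chain. */
    static size_t offset_of(const uint8_t *block, size_t i) {
        size_t off = 0;
        for (size_t k = 0; k < i; k++)       /* must decode every earlier length */
            off += toy_length(block + off);
        return off;
    }

    int main(void) {
        uint8_t block[16] = { 0x03, 1, 2, 3, 0x21, 5, 0x42, 7, 8, 0x10, 10 };
        printf("fixed-width slot 2 = 0x%08x\n", (unsigned)fetch_fixed(block, 2));
        printf("variable-width instruction 3 starts at byte %zu\n", offset_of(block, 3));
        return 0;
    }

Real front ends attack this with predecode/length-marking bits in the instruction cache or with a uop cache, but all of that is extra hardware a fixed-width ISA simply doesn't need.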
It's true that decoding x86 is harder, but Sandy Bridge and later get most instructions from a uop cache, which delivers 4 fixed-length uops per cycle. You could make that 6 wide, but Intel doesn't, because they wouldn't be able to fill that width anyway.
AArch64 has a larger register file and fewer dependencies in general than x86-64 does. For example, most instructions don't set flags. I don't know for sure, but that might be enough to raise the ILP sufficiently.
NetBurst (like the P6 architecture before it) was 3-way decode/3-way retire. (Actually, NetBurst could decode just one x86 op per cycle into the trace cache; the trace cache could deliver 3 uops per cycle if there was a cache hit.)
Just speculating but surely it must have something to do with the entire stack being designed under one roof, no? Having the kernel devs be able to walk across the campus and ask the hardware guys what a register is for must speed up development immensely.
I'm not sure that would explain plain CPU performance as shown here. I think SPEC compiles down to native code, so there should be only a fairly limited part of the whole stack involved here --- compiler guys for generating good code, kernel guys for scheduling/power management, SoC team for CPU/memory/GPU subsystem implementation.
I also thought kernel devs would be working against processor ISAs, not hardware-specific details beyond the ISA.
To some extent. I seem to recall people saying that by restricting the range of page sizes they could make the L1 cache virtually indexed but physically tagged, instead of physically indexed and tagged as in Android phone processors. That means you can start the lookup before the address translation is complete but still avoid aliasing problems (rough arithmetic on the size limit at the end of this comment).
EDIT: But I think another part of it is just being willing to throw more transistors at the problem than Android phone SoC manufacturers are, and that their higher margins make spending more on engineering worthwhile.
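The arithmetic behind that size limit: all of the set-index bits have to come from the untranslated page-offset bits, so cache size divided by associativity can't exceed the page size. A quick sketch; the 8-way associativity here is an assumption for illustration, though 16KB pages are indeed what arm64 iOS uses:

    /* VIPT aliasing constraint: index bits must be page-offset bits, so
     * (cache size / ways) <= page size. Associativity below is assumed. */
    #include <stdio.h>

    static long max_vipt_bytes(long page_bytes, int ways) {
        return page_bytes * ways;    /* largest alias-free VIPT cache */
    }

    int main(void) {
        printf("4KB pages,  8-way: up to %ld KB of L1\n", max_vipt_bytes(4 << 10, 8) >> 10);
        printf("16KB pages, 8-way: up to %ld KB of L1\n", max_vipt_bytes(16 << 10, 8) >> 10);
        return 0;
    }

With 4KB pages the same trick caps out around 32KB at that associativity, which is roughly where many other designs' L1 caches have sat.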
They are, but that's really low-hanging fruit. If Android, for example, doesn't use optimised memcpy implementations, then it doesn't deserve to exist as a serious OS.
That reminds me, when I read fast code in the 90s-2000s all the asm hackers were into writing their own cool memcpy. Were they just showing off, or did Windows actually never optimize its standard library?
People still seem to like writing their own cool malloc, but memcpy not so much.
There are some cases where it may make sense to write your own implementation, if you have a niche microarchitecture that has unusual performance characteristics that the OS doesn't provide optimized routines for by default. But for most u-archs the default optimized routines should do a good job.
Things like malloc are quite a bit more complicated and more workload-dependent, so there's still some opportunity in specializing an implementation there.
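For flavor, the basic trick those hand-rolled memcpy routines leaned on was simply copying in machine-word (or SIMD-register) sized chunks rather than byte by byte. A simplified sketch; real implementations are typically hand-written assembly with alignment handling, small-size special cases, and SIMD or rep-movs paths:

    /* Toy copy routine: move 8 bytes per iteration instead of 1. The inner
     * fixed-size memcpy is just the standard C idiom for an unaligned 64-bit
     * load/store; the compiler lowers it to single instructions. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    static void *toy_copy(void *dst, const void *src, size_t n) {
        unsigned char *d = dst;
        const unsigned char *s = src;

        while (n >= sizeof(uint64_t)) {          /* word-at-a-time main loop */
            uint64_t w;
            memcpy(&w, s, sizeof w);
            memcpy(d, &w, sizeof w);
            d += sizeof w; s += sizeof w; n -= sizeof w;
        }
        while (n--)                              /* byte-at-a-time tail */
            *d++ = *s++;
        return dst;
    }

    int main(void) {
        char src[] = "wide loads beat byte loops";
        char dst[sizeof src];
        toy_copy(dst, src, sizeof src);
        puts(dst);
        return 0;
    }

Wide load/store pipes are exactly why the chunked version wins, and also why a well-tuned generic libc routine is hard to beat without microarchitecture-specific knowledge.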
The article gets into some technical aspects, but focuses specifically on A11->A12 changes.
Mostly, A-series chips have enormous L1 cache, great cache hierarchy and management, and very low memory latency. A12 specifically seems to have included an almost total redesign of the cache hierarchy.
I'm sure there are many more reasons their designs significantly outperform competitors, but I've not seen any more publicly available analysis.
> Mostly, A-series chips have enormous L1 cache, great cache hierarchy and management, and very low memory latency.
Has this always been the case compared to contemporary Qualcomm/Exynos/etc. SoCs? Not implying your statement is wrong here; all I know is that the A-series chips have had a big performance advantage for a while now and I haven't read more detailed analyses in the past that may have given hints as to why.
Also, how difficult is cache hierarchy/management to get right? For something as fundamental to good performance these days as cache, I would have expected the major players to be on more or less the same playing field.
I think part of it is that Apple acquired a very smart team of chip designers. That, plus the fact that they probably give them a bigger budget, explains most of the difference. Also, Apple have pursued 2 (very) fast cores whereas other chips often have 4.
Of course, this iteration Apple have beaten almost everyone else to 7nm, so the difference is much more dramatic.
>I think part of it is that Apple acquired a very smart team of chip designers.
Yes, P.A. Semi was the place where the "last of the Mohicans" of the US chip industry were.
>Also, Apple have pursued 2 (very) fast cores whereas other chips often have 4.
Yes, because people doing app development are basically like web developers, and the word "mutex" gives most of them a panic attack.
Android-style Java should've been more multithreading-friendly, but that does nothing if developers don't actually use the threads.
> this iteration apple have beaten almost everyone else to 7nm, so the difference is much more dramatic.
Yes. I remember how MediaTek beat Apple to 10nm thanks to being a Taiwanese company, but nevertheless "ruined it all" because the Helio X30 was designed with more marketing considerations than engineering ones. Their marketing guys couldn't wait to announce "hey, we have 2 more cores than you, Qualcomm!"
> Has this always been the case compared to contemporary Qualcomm/Exynos/etc. SoCs?
Yes, that's my understanding. I saw this discussion last year and the year before, and every time the answer seems to be caches. Cache memory is expensive. Apple seems willing to pay more for the SoC in order to have an overall experience that lets them get away with the high prices.
From what I read, Qualcomm would not be able to sell at volume an equivalently performant SoC.
Could it also be that a lot more of the applications are written and compiled to native machine code, versus running with the overhead of a VM (even a JIT-compiling one)? I'd guess Minecraft Pocket Edition would perform similarly on both systems.
Even if more things are being compiled to native code instead of Dalvik, I don't think that would explain the benchmark results here, as I think SPEC is always compiled to native code. It seems there's something more fundamental to Apple's hardware that is allowing for such insane performance.
As for Minecraft Pocket Edition, isn't that written in native code anyways? So I'd expect it to perform better on Apple hardware, assuming the hardware is actually the bottleneck for performance.
Looks like things are still compiled to Dalvik bytecode for distribution, but get recompiled to native code upon installation. I didn't know that; I don't follow Android closely, so the particulars of the runtime aren't something I'm familiar with. Still, TIL. Thanks!
Some of it is likely just the high volume sales of a limited number of expensive phone models. They have the money to spend.
Android manufacturers have more competition, and have to address the low end of the market too. Their money, and attention, is spread in a wider swath.
I dunno if I buy this.
Sure, the spread of Android devices and manufacturers is wider across the price and cost spectrum, but there are 'high-end' Android phone manufacturers as well.
Samsung comes to mind; surely it's big enough to produce high-performance phones that compete architecturally with Apple's SoCs while also addressing the developing-nation / low-cost phone market?
Right, but while Samsung's hardware team is focused on designing silicon for low-, mid-, and high-end devices throughout the year, Apple is building one chipset for two or three high-end models every year. Apple just shoves last year's models further down the price spectrum rather than launching new low- and mid-range devices every year.
The tighter focus, combined with the fact that Apple rakes in more cash to spend on R&D, is why their engineering team is able to win out here.
"But there are 'high-end' Android phone manufacturer as well"
Sure, yes, but not at a volume that allows them to create a processor that competes with the iPhone's when it ships only in their flagship model. Also, they can't charge $999 for their flagship. The average selling price of an iPhone is higher than the price of Samsung's flagship model.
Given Apple can get so far ahead of the competition with ARM for small devices... I still think it's entirely possible they're going to make their own CPUs for desktop Macs, and that this is the massive hold-up on the "New Mac Pro 2019(ish)".
The last time they changed architectures, they stuffed a new motherboard in an old case, to let developers get ready for the change.
I think it's unlikely they'll update their flagship Mac to a new CPU architecture without notifying developers first. On day one, all existing apps would perform poorly, and that's not how you promote a new top-of-the-line system.
Except they could do it like their transition to Intel and emulate all software built for the old architecture. But their new architecture would have to be very much better than Intel's. Or maybe they'd translate x86-64 binaries to their ARM architecture.
”This also gives us a great piece of context for Samsung’s M3 core, which was released this year […] Here the Exynos 9810 uses twice the energy over last year’s A11 – at a 55% performance deficit.”
Doesn’t that mean the A11 already is three times as efficient as recent Android cores? (The Samsung M3 was in January’s Hot Chips)
I don't think people realize how important this story is. Apple is about to take the single thread performance crown from Intel, who has held it nearly uncontested for, what, a decade or more? And they're doing it in a phone! Imagine what they could do on a laptop or desktop power budget. I'll bet you won't have to imagine much longer either, as I expect an ARM Mac within a few years.
I’m curious how many cores they’ll put in. Intel’s cores are 10x the size. Let’s say they fit 4x as many cores into a high-TDP package. That would mean a 24-core MacBook Pro CPU and a 64-core iMac Pro CPU.
> Apple is about to take the single thread performance crown from Intel
I believe they already have that crown for mobile/low-power, but for desktop/servers, nope... not even close. Considering the microarchitecture their processors are designed around, they won't even be close to what Intel already does with a single instruction.
Performance can be approximately measured by instructions-per-clock * clock-rate.
But the GHz number on a chip only tells you its clock rate. So if you focus on this, you're missing a crucial aspect of performance (the instructions-per-clock part).
Chip designers can try to increase the clock rate to some insane number, but when they do that, it makes each clock cycle very short, leaving very little time to do productive work.
At any rate, within a particular architecture it's potentially meaningful to look at clock rate. You can expect a 2 GHz Pentium 4 to be faster than a 1.5 GHz Pentium 4. It won't be a whole 33% faster, though, because, for instance, if the processor is waiting for data from RAM, it doesn't matter how fast its clock is ticking; the RAM chips are going to take the same amount of time to get the data that the CPU is waiting for (rough arithmetic on this at the end of this comment).
But when you talk about completely different designs, e.g., Intel versus AMD, Intel versus Apple, etc., clock speed becomes really and truly meaningless. See for instance the "Comparing" section of https://en.wikipedia.org/wiki/Clock_rate
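To put a rough number on the memory-stall point from the Pentium 4 example above (the 30% memory-bound fraction here is purely an assumed figure for illustration):

    /* Amdahl-style sketch: if some fraction of execution time is spent
     * waiting on RAM, raising the clock only speeds up the rest.
     * The memory-bound fraction is an assumption, not a measurement. */
    #include <stdio.h>

    int main(void) {
        double mem_fraction   = 0.30;           /* time stalled on memory */
        double clock_speedup  = 2.0 / 1.5;      /* 1.5 GHz -> 2.0 GHz */
        double overall = 1.0 / (mem_fraction + (1.0 - mem_fraction) / clock_speedup);
        printf("Clock is %.0f%% faster, but overall only ~%.0f%% faster\n",
               (clock_speedup - 1.0) * 100.0, (overall - 1.0) * 100.0);
        return 0;
    }

So under that assumption a 33% clock bump buys you roughly 21%, and the gap only widens as the memory-bound fraction grows.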
I don't believe that Apple invented something that Intel or AMD did not, at least not at 2x scale. The single-core Geekbench score for an i7-8700K is around 10000. It's around 5000 for iPhones, which exactly reflects 5 GHz vs 2.5 GHz. When you're talking about cutting-edge processors, the only difference is clock speed and core count. And while core count can be increased relatively easily, AMD's example shows that it's not easy to reach higher clock speeds.
But your actual performance is really more along the lines of clock speed times instructions per cycle. For example, a 5.0 GHz CPU that only achieves 1 instruction per cycle only does half the work of a 2.0 GHz CPU that achieves 5 instructions per cycle.
Some factors that deeply influence achievable IPC include at least (1) the cache sizes and layout, number of load/store pipes, prefetching relevance and timeliness, and overall memory system design, (2) the kinds, number, and capabilities of the execution units and the quality of the scheduler for these units, and (3) the front-end's ability to correctly predict branches, speed of recovery from mispredicts, and general ability to keep feeding the back-end of the machine.
Beyond that you really need to multiply in the amount of actual work done per instruction. A factor here is the instruction set and the ability of the compiler to utilize it effectively. For instance, a "single" vector instruction may do an amount of computation similar to 4, 8, 16, or even 32 conventional instructions. Code that uses these instructions may get a massive speedup on CPUs that have enough execution resources to execute many or all "lanes" of the instruction in parallel...
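A minimal sketch of that last point, using the GCC/Clang vector extension so it stays plain C (on AArch64 a loop like this typically compiles down to NEON 4-lane adds; the 4-float width is just an assumption for the example):

    /* One "add" per element vs. one "add" per 4 elements. */
    #include <stddef.h>
    #include <stdio.h>

    typedef float v4f __attribute__((vector_size(16)));   /* 4 float lanes */

    static void add_scalar(float *dst, const float *a, const float *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + b[i];                          /* one scalar add each */
    }

    static void add_vector(v4f *dst, const v4f *a, const v4f *b, size_t n4) {
        for (size_t i = 0; i < n4; i++)
            dst[i] = a[i] + b[i];                          /* one 4-lane vector add */
    }

    int main(void) {
        float a[8] = {1,2,3,4,5,6,7,8}, b[8] = {8,7,6,5,4,3,2,1}, c[8];
        v4f va[2] = {{1,2,3,4},{5,6,7,8}}, vb[2] = {{8,7,6,5},{4,3,2,1}}, vc[2];

        add_scalar(c, a, b, 8);
        add_vector(vc, va, vb, 2);
        printf("%g %g %g %g\n", c[0], c[7], vc[0][0], vc[1][3]);  /* all 9s */
        return 0;
    }

Whether the hardware actually retires all four lanes at once depends on how many vector execution ports it has, which is part of what the port counts in the article are getting at.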
Indeed in terms of cellular connectivity, the new iPhones boast a significant jump as we’ve seen an upgrade in download speeds to a gigabit for LTE networks.
I would like to see this speedtest and know the location, please.
This may be obvious to many, but I just noticed that the iPhone's default wallpaper is designed to hide the notch. (At least I assume the wallpaper in this review is the default.)
This bothers me endlessly about Apple's new marketing materials. When comparing the iPhone X and XS on their site, it seems like they got rid of the notch in the XS, even though the phones are almost identical.
My guess is that the notch was very much a marketing feature last year–it defined an entire category of smartphone design. Now that it's "normal" Apple has chosen to draw attention away from it.
It's even sneakier than that: the wallpaper on the XS hides the notch, while the wallpaper on the XR pretty much highlights the notch. I wonder which of the two Apple has a higher margin on...
Less cynical viewpoint: Apple is trying to highlight that iPhone XR has a notch "just like iPhone X does"–similar to the marketing material last year which didn't make any effort to hide the cutout.
To me the screen notch and the camera bump are frustrating design abominations.
We participate in a mass delusion where we pretend modern smartphones are edge-to-edge screen - but they cannot be used like that. You either bulk up your phone with a protective cover or you inevitably break the screen. The super-tough "gorilla" glass is not suitable for the intended use: using your phone every day in a real-world environment. We all pay a premium for thin, light, glass/metal, all-screen phones with notches and bumps, only to hide them in second plastic bodies made in China.
I've not had a cover on an iPhone, and have owned every iPhone since the original. I've broken 1 (and a case wouldn't have saved it... it got run over). Not everybody is fucking gorilla-handling their phones. Some of us can have nice things...
I used to throw my blackberry at people. One time it missed and it skipped over a natural slate floor like a pebble over water - no damage. Also dropped it on cement many times. Never a scratch (sometimes had to pop the battery back in)
Compared to my first iPhone 3G w/ rubber+shell case which I absolutely demolished in my jeans longboarding my first time. I still feel bad about that.
At some point phones became less durable than my own body. It's just different.
If you're bulking up your iPhone with a case, the camera bump is a non-issue.
I agree that we've reached diminishing returns as far as overall thickness is concerned, though.
Would love to know the split between cased and case-less phone users. I'm in the case-less camp, and even though I think I'm in the minority, I suspect that minority isn't as small as people assume.
At this point I'm fairly confident that my XS is more powerful than my 2008 Mac Pro tower. How long before Apple goes full ARM for all of their future products? Owning basically the entire hardware chain.
I've no idea if the following would make real-world economic sense, but if their chip design truly is in a class of its own in terms of power efficiency then there might be a competitive advantage to using them exclusively in their own datacenters.
The iPhone's prices are ridiculous. It can cost as much as 5 average Chinese Android phones. Maybe those don't have as fast a CPU, but they're still pretty usable, and often have a similar amount of RAM. I don't understand why people choose to pay more just to buy a device to browse Instagram or Telegram.
It seems obvious, to me at least, that within the next decade, desktop/laptop/server makers will jump ship to ARM. Cheaper, faster, lower power chips with fanless designs and longer battery life. I can't for the life of me figure out why Intel's stock is valued so highly today. What am I missing?
It's the same reason a lot of other tech dinosaurs are valued so highly. They've been around forever and people are betting on things staying the same rather than changing.
> Cellular ... UE Category 16 LTE (1Gbps) with 4x4 MIMO and LAA
Do cellular providers allow 4x4 (or even 2x2) connections? Do they provide gigabit bandwidth to individual devices? I'd be surprised if they simply handed out bandwidth to whoever could suck down the most.
Yes, because cellular providers now mostly bill by GB rather than unlimited. If you can suck down a GB in 10 seconds that's a GB of bandwidth they sold in 10 seconds instead of 1 minute.
iPhone X or Xs is no good and a UX step backwards...
- No I don't want to take hundreds of accidental screenshots a day & upload them to iCloud
- In traffic jams I want to grab my phone with my thumb and have it open right away, as my iPhone 8 does. Not mess with it... put it up to my face & push up to open it. Why add a step?
> No I don't want to take hundreds of accidental screenshots a day & upload them to iCloud
I hear this is a thing, and fine, enough people seem to have this problem that they should probably look at that. But why are your screenshots uploading to iCloud? That should only happen if you actually take the step to save the screenshots, right?
There's a very long pause where they just hang out on screen before saving - it's at least ten seconds. If you tap them you can delete them from that screen.
What's making you accidentally hit the screenshot gesture?
I just upgraded and honestly FaceID is really nice. I loved TouchID but there were plenty of times it didn't work because my fingers were too moist or whatever. FaceID really doesn't have any issues other than using it too close to your face.
I wish you could teach it multiple faces, because it doesn’t work for me if I have stuff on my head, like a bike helmet and sunglasses, or ski goggles, etc
Huh. My iPhone X works perfectly with FaceID, since iOS11, clipped to my bicycle handlebars. I swipe up, glance down, and it recognizes me + bike helmet + polarized sport sunglasses + blue sky behind me. Seems like that's about as tough a use case as it could get short of a scarf covering my face (in which case I don't really think I want FaceID to work anyway).
Could be that your sunglasses block the faceID scanner. I have some brands of sunglasses that do not work with faceID, but other brands that work perfectly. I imagine it depends on the filter they apply to the lenses.
Would you care to elaborate on what you find objectionable about Face ID? Just like the fingerprint reader, all data is stored on the device secure enclave, and does not leave the phone. And it's been pretty much fast enough for me since the X. (I don't notice much change in speed for the XS but it's probably a bit faster.)
Not GP, but there are three big downgraded use cases:
- Apple Pay ("double-tap the side and look at the phone, then set it on the terminal" is not remotely as fluid as "rest your thumb on the fingerprint reader and touch the phone to the terminal")
- Unlock the phone while it is laying flat on a table/desk in front of you so you can read a notification without picking the phone up and drawing attention that you are doing this.
- Remove the phone from your pocket and press in on the home button with a registered finger so it is fully unlocked and 100% ready to go by the time you can see it in your field of vision.
Overall, however, I am so much more bullish on the long-term value of Face ID, and of the phone knowing whether you are paying attention to it or not, that I have accepted these downgrades. There are also several upgraded experiences.
- A12 has 8MB on chip “SoC cache”
- Big core L1$ = 128kB; Little Core L1$ = 32kB