ARCore: Augmented reality at Android scale (blog.google)
393 points by transcranial on Aug 29, 2017 | 317 comments



> ARCore will run on millions of devices, starting today with the Pixel and Samsung’s S8, running 7.0 Nougat and above. We’re targeting 100 million devices at the end of the preview.

That's...not great? For comparison, ARKit on iOS is going to support 400 million devices at launch (very rough numbers: ARKit runs on any new iPhone Apple's released over the past two years - iPhone 6S/SE/7 - and they sell over 200 million a year). Hardware fragmentation is a tough problem to solve.


The exact same thing stuck out to me as well. Mostly because they had to start the article with this sentence:

> With more than two billion active devices, Android is the largest mobile platform in the world.

So they'll get it onto 5% of active devices, or one in every twenty. Not great, perhaps not even good.

I've not been a fan of Android fragmentation for a while now, and have been surprised that Google hasn't been able to do more to attack the issue. Even when Treble launched, I asked myself... is that all? Rock and hard place I guess.

I actively avoid Android devices because of this issue. My last Android device was a tablet in 2011.


The purpose of ARCore is for high-end Android devices to be competitive with the iPhone technologically, not to get on 2 billion devices, at least not now.

Therefore it's fine that they get only 5% to start. I assume it's going to turn into 10%, then 20% quickly. For some perspective the iPhone only has about 15% market share globally.


> The purpose of ARCore is for high-end Android devices to be competitive with the iPhone technologically, not to get on 2 billion devices, at least not now.

This article is explicitly spun to promote ARCore as "Android scale" and leads with a sentence pointing out that Android is on two billion devices.

You're right, their ambitions are much lower, but it's fair to point out their actual goals aren't anywhere near the image they are trying to project and they really shouldn't be bragging about it being "Android scale" when it's nothing of the sort.


It'll only do that if ARCore itself takes off. With such small penetration, it's entirely likely that developers will just focus on Apple's ARKit.


> likely that developers will just focus on Apple's ARKit

Most likely someone will just build and offer an abstraction layer on top of ARKit / ARCore and solve the problem that way.
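Roughly, the surface of such a wrapper might look like the sketch below (Kotlin, all names hypothetical — this is not an existing library, just the shape of the idea):

    interface ArSession {
        fun start()
        fun stop()
        // Cast a ray from a screen point and return any world-space hits.
        fun hitTest(screenX: Float, screenY: Float): List<ArHit>
    }

    data class ArHit(val x: Float, val y: Float, val z: Float)

    // One implementation would delegate to ARCore on supported Android devices,
    // another to ARKit on iOS; app code only ever talks to ArSession.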


Google have already provided this themselves, plus hooked it up to a 3D rendering engine:

https://github.com/google-ar/three.ar.js

That's assuming you're okay with JS, but if you're going cross-platform and need interactive scenes, it makes sense.


The problem with WebVR is that, just like WebGL, it is capped compared to what is possible in actual OpenGL ES.


I've been developing WebGL software for a client for the last few years; the client has had quite good success selling content based on it, and consumers are actually using that content. The vast majority of browsers fully support WebGL 1.

The biggest performance bottleneck is the JS itself; stock three.js in particular is quite inefficient. However, that's just a matter of optimising three.js, or of using WebGL directly if you need more fine-grained control. You can get a lot done in a browser these days.


I have seen impressive WebGL demos, but for some reason WebGL always makes the GPU sweat more than plain native code, driving my fan at full speed even though it offers less.

Maybe it is a consequence of using JS; I have never researched it.


Writing the code is probably the easiest part of all this. Testing and design are going to be huge.


AR.js seems to be doing it at this very moment


And obviously a very good driver to get people to upgrade to high-end Android phones. There aren't too many use-case drivers for that right now.


Ditto. To add more wood to the fire, consider that those 2 billion active Android devices are not “Google’s Android” devices. In China, where the largest bulk of those 2 billion devices are, Google isn’t allowed to operate, which means that SDKs like this one that are part of or require the Play Services framework are nowhere to be found. I would put Google’s Android active device numbers somewhere around 300 million. Translation: ARCore will show up for 1 out of every 100 users six months down the line, when Samsung, LG and other partners decide to release it. In the meantime, around Sept. 29 of this year Apple will have 100 million+ ARKit-ready devices - based on the 80%+ install rate for new iOS versions - when iOS 11 is released. Just saying.


The 2 billion is the number of devices with the Play Store. Android devices in China do not have Google Play Store or Services on them. So devices in China do not factor into this number.


I'm still bitter how they abandoned the Nexus line in favor of Pixel. Hopefully things will be more stable with the in-house design.


> Even when Treble launched, I asked myself... is that all? Rock and hard place I guess.

Curious, what exactly were you expecting? There hasn't even been enough time for Treble to play out to see if it brings results.


As a user, how does fragmentation matter to you? It shouldn't affect you directly. The place where it matters is that you'll get fewer developers making software for you, but honestly, in the past 3 years, that hasn't really been an issue on Android. Except for one or two big games like Mario, it's been fairly equal to iOS in terms of app support.


The developer-support issue isn't just a minor quibble, especially when it comes to quality-of-life factors like browser versions (e.g. LG, HTC, Samsung, etc. have all shipped Chrome builds with subtle differences). This also applies to optimizations for CPUs, graphics hardware, etc. — the app may be available on both platforms, but it almost certainly runs better on iOS.

The other problem is that fragmentation increases the odds of a vendor walking away from a device since the per-model sales are divided more ways. Trying to pick a device which will get updates basically involves sales forecasting which wouldn't be necessary if the platform was open.


Fragmentation impacts which device you might want to buy. The biggest argument for Android is the wide choice of devices and features on those devices, but if your phone never gets an update, you're left behind. It's possible to get a really nice and inexpensive Android phone, but what's the risk of never getting an update?


Not only that, but you can quickly solve that issue by going with Pixel/Nexus devices. So I'm not sure that the criticism is well-deserved.


It certainly is an issue if the entire Android ecosystem of devices is reduced to just one niche $650 phone.


Sure, but you are also comparing it to a $700 phone (the iPhone), and I don't think that's fair.

Like, obviously, iOS will have a cleaner and less fragmented ecosystem, because they support far fewer devices (all of which they own).


You don't have to avoid Android devices, just buy Google ones. Most of the others are filled with crapware and bad customizations anyway, no idea why any savvy HN reader would want those.


Which is easy to say if you’re on Silicon Valley wages.

But a student trying to develop for Android can’t afford devices that start (!) at 908.35 USD. (Price of the Pixel 5" with 32GB storage, in Germany, right now).

And Google just dropped support for my phone, the Nexus 5X (which I bought for 324 USD, on sale, just before they stopped selling it)


How is it any worse for iOS? You even have to fork out $2k for a Mac as a dev machine!


This hardware [1] is a bit long in the tooth but you do not have to pony up $2K to get started.

[1] https://www.apple.com/us/shop/buy-mac/mac-mini


With iPhones you get around 4-5 years of updates, compared to the 2 years promised for Google's devices.[1]

Buying into iPhone development might be more expensive upfront, but you get a longer update period (which is obviously important for developers).

I paid around $725 USD for a 2012 Macbook Pro which can develop for iOS and it has 16GB of ram, an SSD and a hard drive for time machine.

[1] https://support.google.com/nexus/answer/4457705#nexus_device...



I never developed for iOS, because a platform that has in my home market (the EU) a market share similar to what Windows Phone 7 had in the US (below 15%) is useless. Especially when development starts upwards of $3000.

I’m comparing between developing on Google devices, and non-Google devices.


Your numbers are incorrect: iOS market share is around 22% in the EU5 (GB, DE, FR, IT, ES) according to https://www.kantarworldpanel.com/global/News/iOS-and-Android.... And according to https://www.kantarworldpanel.com/global/smartphone-os-market... Windows Phone's market share (as opposed to device sales) in the US peaked at 5.6% in 2013. That's a factor of 4. You may not like iOS, and there's nothing wrong with not considering a platform for ideological reasons, but it's not irrelevant.

Also, a used Mac Mini isn't $3000.


My numbers are the ones the EU antitrust committee recently published. Below 15% was the number they used as an argument for why Android is a monopoly.

> Also, a used Mac Mini isn't $3000.

A Mac Mini that’s still under warranty, and an iPhone, and some years of Apple Developer fees, are.


A new Mac Mini with 8GB RAM is 819€ in Germany. A new iPhone 7 32GB is 759€. That's 1578€, or $1880. Add a one-year developer subscription and you're still below $2000. Your claim is just plain wrong.


The developer subscription isn’t just $99. You need to pay with a credit card, and the cheapest credit card available in Germany has an annual fee of 39.99€, unless you move > $1300 a year via a CC. (I don’t have a CC; neither do 90.1% of Germans)

So you have a ~$150 annual fee just for the developer account. Plus an additional massive cost every time I need to replace the device.

The starting costs are significantly higher; maybe not 3k, maybe just 2k, but that’s the same anyway: Completely unaffordable.

And running costs are massively worse, too.


> the cheapest credit card available in Germany has an annual fee of 39.99€, unless you move > $1300 a year via a CC

Only if you ignore the free credit cards that are independent of a current account from Advanzia, Santander Consumer, Barclaycard, Hanseatic, Payback, Targobank, ICS, and some I forgot.

Then there are also a lot of unconditionally free current accounts that offer a card (credit or debit) that can be used to pay that subscription, such as those from DKB, Comdirect, ING-Diba, Consorsbank, Wüstenrot, Norisbank, Fidor, N26, and so on.

Those offers are not hard to find; some of them have been available for a decade or more now. Even pretty much every consumer bank you can name in Germany has such a card on offer for less than 40€ yearly. That includes small cooperative banks. If yours does not, you should consider switching (or opening a second account); competition is fierce.


None of them have free credit cards.

Santander, Barclay, Hanseatic, Targo, DKB, Comdirect, ING-Diba, Consors, Wüstenrot, Noris, and N26 require that you have at least 600€ each month of movement via the bank account, or at least 1200€/year via the card.

I’ve tried getting a card from all of them. All of them refused, as I’d not use them enough.

> If yours does not

I’m a customer at 2 major national German banks (Postbank and Commerzbank) and a local one; none of them (or their subsidiaries, such as Comdirect) offers this.

I just want a credit card, I’ll use it at most once a year, for at most 100 bucks. That is not available from any bank.

The only offer even similar in cost is the old MyWirecard one, and they closed my account due to not using it enough.


All of the ones I listed have at least one free offer without a 600€ requirement; not sure where you got that idea from. I am a customer at some of them without using the account at all and don't pay any fees. Some of the banks are quite picky and refuse customers for no apparent reason, but none of them I know of even asks how much money you plan to spend with it, so that couldn't be a reason. If all of those banks have refused you there has to be something seriously wrong with your credit report, I would get a free § 34 report from Schufa and Creditreform to check if everything is correct if I were you.

Postbank has a Visa or MasterCard for 29€ yearly. Comdirect has a free Visa Debit card. I have it and have not used that account for over two years now apart from downloading the statements. Nothing happened so far and I haven't paid anything.

In your case the easiest solution would probably be to order the British prepaid card from Revolut which is available for a one-time 8€ fee and I don't think they refuse anybody. Just note that they are not regulated as a bank but as electronic money so I wouldn't leave large sums of money for a long time on the card (but that's not something you want to do anyway).


> If all of those banks have refused you there has to be something seriously wrong with your credit report, I would get a free § 34 report from Schufa and Creditreform to check if everything is correct if I were you.

My credit report is perfect, and I’ve had no issues with that – but the problem is that I, as a student, currently have zero income and zero spending.

> Comdirect has a free Visa Debit card

Only in combination with the Girokonto, which is only provided if you have a monthly income. Same with Hanseatic. Check their AGB, or call them.

> Postbank has a Visa or MasterCard for 29€ yearly

Same story here: I’m a customer at Postbank and have a Sparbuch in the 5 digits, but I can’t get a free Visa or MasterCard until I have > 670€/month income, and I won’t pay a single cent for a credit card.

> In your case the easiest solution would probably be to order the British prepaid card from Revolut which is available for a one-time 8€ fee and I don't think they refuse anybody. Just note that they are not regulated as a bank but as electronic money so I wouldn't leave large sums of money for a long time on the card (but that's not something you want to do anyway).

I’ve had one with them, too; they closed the account because it wasn’t used often enough.


Oh quit the nonsense! I'm paying nowhere near 40€ for my credit card in Germany (and have no minimum transfer volume requirement). Also, ⅓ of Germans have a credit card: https://www.bundesbank.de/Redaktion/DE/Pressemitteilungen/BB..., and it's not like this is the only occasion where one might be useful.

Not to mention that all that gives you the ability to, you know, sell apps on the App Store and recoup your expenses. $2000 is a very reasonable entry price for something you could be making your living with. It's expensive for a hobby, granted, but it's not like the stuff you buy is single-purpose. And it keeps resale value amazingly well.

You can really stop it with the made-up numbers now. This is ridiculous.


> but it's not like the stuff you buy is single-purpose

For what I am doing, it literally is.

> $2000 is a very reasonable entry price for something you could be making your living with.

For a student that has 50€ left each month after rent, food, etc, $2000 is quite a fucking bunch of money.

> Oh quit the nonsense! I'm paying nowhere near 40€ for my credit card in Germany

Which bank? A credit card without being required to be linked to an account with a certain monthly or annual transfer?

> Not to mention that all that gives you the ability to, you know, sell apps on the App Store and recoup your expenses.

Which is so useful when the whole goal is to port existing GPL'd open source apps for my projects to iOS, and provide them for free.


I bought a Nexus 5 (that was supposed to give you the Google experience) after my iPhone 4, and it was the worst phone that I ever had. I barely managed not to throw it against the wall several times. After less than 2 years I sold it and bought an iPhone 6s that I still have and use happily, without any "phonicidal" thoughts.


My Nexus 5 has served me well for many years without incident. My next phone will be a Pixel 2.


I wasn't comparing Google Phones to iPhones, but to the rest of the fragmented Android market. iPhones don't come with bloatware and crapware either. But what issues did your Nexus 5 give you?


Not the parent comment, but my Nexus 5 power button got stuck and bootlooped my phone.

Replaced the motherboard myself, a year later, same issue.

Bummed about it because otherwise it's a good phone at a great price. Only thing I complain about other than that is how terrible the camera is, but for a cheap phone that's a few years old, it's unfair to compare to today's devices.


Let me know when Google goes back to making devices with a headphone jack, an SD card slot and a replaceable battery.


Personally I would prefer them to target the top devices with a great product instead of delivering a mediocre product that all devices can run.


That doesn't make it very appealing for app developers, though, if the size of their user base is dramatically limited to just people with the latest and greatest devices.


I'm not sure what's unappealing about targeting users that can afford to always have the latest and greatest. That's effectively exactly what you're doing if you choose to target iOS over Android.


I guess the problem is that (going by Google's target of 100m users with this feature by winter) Apple have vastly more of those users.


Considering Android apps make 15% of the money iOS apps make, perhaps it is for the best to only target devices that happen to be owned by people who spend money.


If Android apps make 15% of what iPhone apps do despite having a much larger market share, what would be the point of targeting a smaller fraction of that? Even if those users are all willing to pay, you're probably not going to get higher than that 15%.

The "high-end" app market on iOS is bad enough; would you want to try and fight that battle on Android?


Today's "a few" is tomorrow's "everyone"; this has been proven before (the original iPhone, car technology, etc.). You take the time to perfect what you build with a smaller audience and more powerful technology, and eventually there will be more people to sell it to.


The question is if you're trying to earn your living doing that can you wait long enough for "a few" to become "everyone"? There are a lot of businesses that fail because they're too early.


I think it runs on every A9 (iPhone 6S) powered phone and tablet or newer. So more or less every iOS device sold since 2015/2016.

Yeah, and fragmentation in Android versions is pretty bad. Also, the quality of the underlying hardware might not be as consistent as in iOS devices. It relies on a lot of sensors to get it right, not just raw CPU power and a nice camera.


> It relies on a lot of sensors to get it right, not just raw CPU power and a nice camera.

The Android CDD maintains requirements for sensor accuracy and performance: https://source.android.com/compatibility/android-cdd#7_3_sen...


It does, yes, but the CDD minimums are far below what a 2015/2016 iPhone, or a recent Google Nexus phone, actually delivers.

e.g. lots of Android phones basically can't hit the low-latency audio targets that iOS has been able to hit essentially since launch.


iOS has had a ton of effort put into its audio stack - it was an iPod replacement as well as a phone. Remember when the app was still called iPod?

iPod users wouldn't have taken the iPhone as seriously if Apple hadn't put in the work to make audio a first-class citizen. Android doesn't have that pedigree/baggage so it's never been as important.


Only on HN would targeting 100 million devices for a preview be seen as small fry.


The sad part is that they aren't serious about sorting it out.

Treble could have been the solution, yet they are still allowing OEM customizations, and the OEMs are the ones expected to keep pushing the updates, as discussed on the ADB Podcast.

Guess how many OEMs will choose to push Treble updates instead of selling new devices.


Software updates alone can't fix it as you can ship all sorts of hardware with Android. Apple has a known combination of hardware across its models. Even subtle differences in imaging sensors would be hard to account for.


It's mind-boggling how Windows Update has been successful for decades now in the even more fragmented PC world, but Google gets a free pass because... reasons.


No software update in the world is going to help when there are thousands of different sensor configurations in use. Apple only has a handful and can test against all of them with ease.


That explains AR, but what about basic Android functionality being stuck because your device isn't updated?


Correct.

Google could have taken the same approach as Microsoft always has since the MS-DOS days and set specific hardware requirements.


Simple explanations for complex issues (e.g. getting many vendors and layers of the vertical cooperating) are seldom of much use. Bringing up Microsoft -- they tried the whole fixed hardware thing with Windows Mobile cum Phone and failed catastrophically, always two steps behind -- is not convincing.

Android solved a very different problem than iOS. It has compromises and benefits (that iOS has enjoyed by proxy) in doing so. The OS that we know today is the combined knowledge of many OEM customizations. The hardware of a vicious battle between many vendors. The screens and designs the result of an endless tug of war. And again, Apple has enjoyed the fruits of these.

But saying "If only they..." is seldom from a place of reason.


I did not mention Windows Phone in my comment.

Apparently no one at Google ever read PC hardware specifications from Microsoft like "PC 97 Hardware Design Guide" or "Hardware Design Guide for Microsoft Windows 95", among a few other hardware guides from them.

https://www.amazon.com/Hardware-Design-Guide-Microsoft-Profe...

https://www.amazon.com/Hardware-Design-Guide-Microsoft-Windo...

Of course the Windows Phone attempt at this failed, because by the time Microsoft tried to apply the same rules as they did for PC hardware, OEMs already had the Android wild west to play with.


I don't think you can judge Microsoft's approach on update management based on the failure of the platform. The platform failed because there's a monopolist in the space with an 85% market share and huge network effects regarding app development.

Microsoft hit update handling for Windows Mobile spot on. A 2014 Windows Phone today[1] is a safer bet if you value security than any Android on the market right now. May not run the apps you want, but it is going to have the latest security updates.

[1] My Lumia 929 is currently running Windows 14393.1593, released August 8th, 2017. I can safely expect to receive the next update on September 12th.


> The platform failed because there's a monopolist in the space with an 85% market share and huge network effects regarding app development.

When Windows Phone 7 was released (this is ignoring that Windows CE was around for about a decade before, giving Microsoft a massive headstart), Android had about 18% of the market. Microsoft leveraged their enormous base of desktop developers, pulling them through various failed initiatives, to get them developing for Windows Phone. Android started effectively from nothing.

In the context of actual history your post is simply surreal.


In the context of the discussion, your post doesn't make any sense. We're talking about how Windows Mobile fixed updates... which happened multiple years after Windows Phone 7. Windows 10 Mobile is the first version that doesn't handle updates through OEMs the way Google does it. (Even the stricter hardware requirements didn't really set in until Windows Phone 8.)


As I understand it, the OEM modifications will be relegated to the vendor/odm partitions. They specifically hint at being able to push new system partitions directly in the future.


1 - One needs to buy a new Android 8 device.

2 - Android 7 currently has about 13% adoption after one year

3 - There is currently no obligation for OEMs to provide updates

4 - Given the failure of security updates, which only get delivered to flagship devices, I'll believe Google will ship them directly when they actually do it.


1 - Which OEMs will be pushing consumers to buy, according to you.

2 - Right, this is something Treble will attempt to solve.

3 - Which is why Google is making it as easy as possible for them to do so, right down to actually pushing out the updates for them.


1 - I guess only Google with its rather expensive Pixel device, and probably Samsung and Huawei flagships only, in the same price category.

3 - I would really like to be proven wrong here. Still waiting for my Android 7 update on a newly bought device.


None of that actually seems to solve the issue of developing updates and pushing them to users, though.


Actually it does!

For example, something I was excited to see in the updated documentation on source.android.com is how they recommend developing kernel branches alongside devices: https://source.android.com/devices/architecture/kernel/modul...

Basically, one of the issues that's plagued OEMs in the past is the fact that each SoC vendor (e.g. Qualcomm, Samsung, ARM) would not only have a kernel for each SoC, but each OEM device using that SoC would be another branch on that SoC-specific repository. This would then be compounded by the OEMs having their own kernel repositories that pull from the device-specific branches of a SoC repository, and bug fixes from OEMs would have to trickle back upstream to the vendor repository before making them back to other device-specific kernels.

The new proposed model would see a single kernel source for a given SoC, and device-specific changes would come in the form of kernel modules, config overrides, and device tree overlays. They are also strongly recommending SoC changes to the kernel be upstreamed to mainline Linux, which would greatly ease porting to newer kernels in the future.


Recommendations are just that: advice that might be followed, or not.

It isn't the first time that Google has made recommendations to OEMs.

They also made recommendations for security updates, advice that OEMs only follow for the customers who paid big bucks for flagship devices.

So unless they actually provide requirements instead of recommendations, nothing will change for consumers.


So how do these devices add up to "millions of devices" or "targeting 100 million devices"? AFAIK, Pixel devices and S8s combined would be much fewer than 100 million devices. Is this even the correct number?


So what's the alternative? Not allow your operating system to run on low-spec phones?

Stop selling low spec phones?

Of course it isn't gonna run in a supported configuration on some dumb phone from 5 years ago. Those people can't afford or don't care about VR.


Yeah, I don't think shipping for all Androids would have been realistic. AR requires pretty tight tuning for the available hardware stack, and I guess Googlers just tuned it for their own phone and the most popular flagship to start.

Apple has a significantly easier job here - especially since it doesn't need to get OEM buy-in for this kind of thing :/


Does this mean that the lower end OEMs may not provide this on new devices at all? Or if they do it themselves that it may be a very low-quality calibration so the feature doesn't work well?


Question on a diff layer:

first, 100MM or even 400MM devices is no fucking joke...

but the question is bandwidth: the bandwidth to deliver drivel to the devices... not the apps, but how much bandwidth ads will consume versus consumed content.

If I were a VC, I would be purchasing every-single-pipe-in-existence for the future.

We went from pipes-are-pope to content-is-king -- but the fact is that the control of information is in the plumbing ( at the high level ) -- not the holders of attention... they are just firewalls.

Facebook is a firewall.

Google is a firewall.

Reddit is a honeypot.

etc...

you get the analogy.

so...

Who owns the pipe.

ISPs were vilified... FFs the NSA sent the head of Qwest to prison for not spying... the scientologists refuted the NSA on carnivore (because they were already monitoring the networks for clams) -- the .gov went after the guy who detailed all fibers.. -- how many more examples would you like...


> ISPs were vilified... FFs the NSA sent the head of Qwest to prison for not spying... the scientologists refuted the NSA on carnivore (because they were already monitoring the networks for clams) -- the .gov went after the guy who detailed all fibers.. -- how many more examples would you like...

Can you explain your last paragraph a bit? It's a bit telegraphese and I'd like to know to what individual things you are referring. Thanks!


Sorry for the delay, been a long day...

---

NSA had been trying to install network monitoring boxes in all the places.

This was discovered after a whistleblower from AT&T mentioned this happening in the MUX/IX in the infamous Room 641A

https://en.wikipedia.org/wiki/Room_641A

---

There is way too much to educate ppl in one post on the evolution to the above, but I can go back decades if I have time - and ppl have interest...

Anyway - the NSA was running all over and installing spying boxes... they got caught.

The CEO of Qwest was convicted on "tax fraud" or some such after he had the balls to stand up to the NSA and not install their shit.

Aside: Qwest Communications was founded as a splinter from a railroad company that realized it had right-of-way easements along ALL its rail lines -- which made it easy to lay fiber alongside, and then build a comms co on that backbone.

Earthlink, which was funded early by key Hollywood Scientologists, refused to install the boxes as well -- but because they were already spying on all their user traffic to hunt for "clam" people (clam people are those that made fun of Scientology due to Scientology's belief that humans evolved from clams - among a great many other weird beliefs... anyway, they weren't prosecuted, presumably due to Hollywood big-shot money... etc...)

---

There was a student who was able to look up all the public-works filings for all municipal and federal digging plans/permits/etc and wrote a detailed report on the critical network infra of the US, thus revealing its weaknesses -- the report was nabbed as "national security" as "terrorists" might attack it (though they didn't really report on the fiber cuts which caused major upheaval and outages in Silicon Valley) -- but they shut that kid's thesis (I think it was a thesis) down...

The revolving door with the NSA and silicon valley is wide open, obvious and pretty big...

If you want more info, email me... revealing too much on HN is stupid.


Thanks for the extra info. Is your email foo@sstave.com ? Who was the student, do you remember? The Scientology thing is bonkers.


Is there a third-party AR library that can be bundled in an Android app, using existing cameras/sensors? I wonder.


Is it only Tango?


It's not Tango at all.


Typical Google BS. Why not the S7? It runs an almost identical CPU/GPU to the Pixel. And it runs 7.0. DOA.


Could be a camera or sensor quality/feature issue. Or could be they just haven't built a lens map for it yet, so they don't have fov and distortion metrics.


S7 is a bit old by today's standards


That was last year's phone. Apple is supporting the 6S phones which will have been replaced twice (7, 7S) after iOS 11 ships.


The hardware is identical to the Pixel's.


Occasionally I'll write an app for my kids or wife. Every time I'm thoroughly impressed by the Apple development ecosystem and thoroughly disgusted by Google's for Android.

This is no different. The Android development process is painful (the most verbose, cruft- and boilerplate-filled Java), cumbersome to organize and build (Gradle is terrible, and buggy) and debug (the integration with Studio is just clunky). About the only thing Google does better is testing releases through the developer console.

It's nice to see them finally providing something similar to ARKit. I just wish they'd work on all the other things that make Android development a horrible experience.


Having developed for Android for many years and for iOS/Swift for over half a year, here's a bunch of random thoughts:

Developing for iOS/Swift is a more pleasant experience; the APIs are simpler and better thought out. But it's not a huge difference.

Android Studio is a better (but uglier) IDE, but not by as much as people say.

Android's Activity and Fragment APIs are a horrible joke. I mean, Google is supposed to have these super smart devs; how could they design this mess?

Creating UI in iOS sucks - specifically, Auto Layout and Interface Builder. I usually use FB's Yoga; it's not ideal, but it's less of a pain.

Swift is one of the best-designed languages out there.


There are a lot of complaints about Java being verbose and bad - does Kotlin fix these issues? Does it make it fun again? I personally used to dislike programming until I started using Python at work, and I fell in love again. Syntax, libraries and API design can definitely have a huge impact on how much you enjoy what you do.

As for your comment about Activity/Fragments, they've had a few I/O talks going over their thoughts:

https://www.youtube.com/watch?v=k3IT-IJ0J98

https://www.youtube.com/watch?v=FrteWKKVyzI


I would argue Kotlin is a lot more fun than Java. As for the Fragment lifecycle, it still sucks, but no one forces you to use it. Lots of apps never even use one fragment. Google simply made fragments hoping everyone would make their UIs into reusable components which work better on tablets.
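For a small taste of the difference (a toy example, nothing Android-specific):

    // Data classes, default arguments, null safety and string templates
    // replace most of the ceremony the equivalent Java would need.
    data class User(val name: String, val email: String? = null)

    fun greeting(user: User): String =
        user.email?.let { "Hi ${user.name} <$it>" } ?: "Hi ${user.name}"

    // The Java version would be a constructor, getters, equals/hashCode/
    // toString and explicit null checks - easily ten times the code.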


The problem is the benefit you note requires a lot of extra effort and at least one additional layer of abstraction that more than doubles the boilerplate/cruft required to maintain proper UI flow through an application.

Their "solution" is more of a problem than the problem it was meant to solve.


I don't think Java 8 is that bad, it's just mediocre. It's fast and has a good ecosystem.

Yeah, Python is fun, it seems effortless to do stuff there compared to Java. For example, "a = [1]" vs "List<Integer> a = new ArrayList<>(); a.add(1)" - and then you have to compile and run, no REPL to try it out.

Recently I've been using Julia, it's maybe the most fun I had with a language. Some things are much easier to do than in Python - and it's 1 to 2 orders of magnitude faster.


Since you mentioned Julia - are you using it for math or math-related programming, or doing something else? It looks interesting to me, but I'm a web dev (Python on the backend) and don't have much use for most of the math stuff, so I don't know if it's suitable.


The language itself is general purpose, but its community is targeted towards scientific computing, so the ecosystem for web dev isn't (yet?) there.


> Swift is one of the best-designed languages out there.

One feature I dislike about Swift is the Optional type and the "?!" business, borrowed from Scala, I assume. It makes the coding experience much less pleasant while providing little value-add. What is wrong with good old nil?


https://github.com/facebook/yoga/blob/master/PATENTS

That's a pretty big price they charge.


I find creating UI in Xcode, with Auto Layout and IB, far better than using Android's tools. Mainly because IB actually shows me what I'm working on, and what it's going to look like. The Preview window in Android Studio routinely errors out, telling me it can't preview my layout because of some error, often caused by something from Google's own compatibility library!


Is refactoring in Xcode using Swift still a horrible joke? It just seems odd that you would say Android Studio is not that much better than Xcode considering the horrible inadequacies of Xcode when it comes to Swift. Meanwhile, pretty much anything you can do with Android Studio, with respect to Java, you can also do in Kotlin.



Not anymore: Xcode 9 has some pretty great refactoring features, but it's still in beta.


Eh, they've both got their pros and cons.

iOS:

+ Swift

+ Less screen sizes to worry about

+ Less iOS versions to worry about

+ Xcode is much lighter on resources

- Mac only

- Xcode might crash every now and then

- Probably need an iOS device, as the simulator is very slow

- $100/yr developer fee

Android:

+ Kotlin

+ Studio runs everywhere

+ No recurring developer fee

+ More stable IDE

+ Decent emulation

- Countless screen sizes to worry about

- Lots of Android versions to worry about

- $25 developer account fee

- Android Studio is resource heavy

- NDK requires lots of JNI boilerplate

Though, with all of this AR stuff, I'd just go the Unity/Unreal route, as it will probably be very game-y and such.


> - Probably need an iOS device, as the emulator is very slow

Only if you're doing OpenGL, as that's actually rendered in software (not sure if you're using Metal). Otherwise, the simulator (not emulator!) is fast, which is as it should be as it's running native code (which is why it's a simulator, not an emulator). If anything, it's important to test on devices because the simulator can mask performance problems (though as devices become more and more powerful that becomes less of an issue).


AFAIK, there is no hardware acceleration whatsoever for graphics in the simulator, it's all software rendered.


I'm sure you're correct, but that seems backwards. OpenGL is a cross platform API, why would that be emulated in software? And the iOS devices have a different CPU architecture (ARM) than the MacBook Pro (x64) you develop on, so how is that being run as native code?


I can only speculate as to why OpenGL is rendered in software, I assume it has to do with the fact that the graphics capabilities are different on iOS versus Macs, and perhaps software rendering is needed to ensure consistent behavior or implementation of OpenGL extensions (though I'm not positive that the simulator offers the same OpenGL extensions anyway; I'm not a graphics programmer so I haven't really explored this).

As for it being a simulator, it's because your app actually compiles to x86_64 code when you're targeting the simulator. When you switch between targeting the simulator and targeting a device, your app is recompiled for the new architecture. And the simulator includes a complete copy of iOS and all its frameworks (but without most of the built-in apps) that were compiled for x86_64 in order to run in the simulator.


OpenGL is not being emulated, it's being implemented. OpenGL is just the API and specifications, it's up to individual graphics hardware vendors to put a conforming OpenGL implementation in their driver. Ideally it would all be done very fast in hardware, but there are still times when a particular feature can't be done on certain hardware, and is performed in software to conform to the OpenGL spec.

Apple already has a complete software OpenGL implementation, which they may have modified to simulate the individual OpenGL ES implementations for each iOS device. This also has the advantage of removing the developer's hardware from the equation: If they want to test a bleeding-edge OpenGL ES app on a really old MacBook, it'll run - just slowly.


When you compile for the simulator, the binary output is x64.


Anyone that uses the excuse that there are "Countless screen sizes to worry about" makes me wonder if they even know what they're talking about when it comes to Android development. The developer of Pocket Casts says this is a fallacy:

https://rustyshelf.org/2014/07/08/the-android-screen-fragmen...


To whatever degree it was true, it was a lot MORE true when you only had to worry about the original iPhone screen, versus the original plus the Plus right now, or maybe the 5 as well.

It's no longer just one or two sizes on iOS. Probably a minimum of three, assuming you don't want to do a tablet app, and that may change in two weeks.


I'd also add: NDK uses several different flavors of libc++, each screwed up in its own unique way. Apple uses a bog standard libc++.


It provides several different flavors of the C++ STL, including libc++. `libc++` is specifically the implementation shipped by LLVM/Clang (which is also what Apple uses).

The next release of the NDK (r16; due out sometime this quarter) will stabilize libc++ for use by NDK applications and it'll be made the default STL in r17 (which will be out by the end of the year).


That doesn't change the fact that C++/NDK development has been busted for years, for no apparent reason other than Google not paying attention.


If you check the ARCore announcement, the line "ARCore works with Java/OpenGL, Unity and Unreal and focuses on three things:" shows what the position of the NDK is.

Nothing more than a way to implement Java native methods, which I'm completely OK with, given the security implications.

I just wish that, since they have their own fork of the Java world, they would also bother to provide something else besides forcing us to manually write JNI calls.
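For those who haven't had the pleasure, this is roughly what the JVM side of one hand-written JNI entry point looks like (hypothetical names; the native side then needs an exported C function with the matching Java_..._processFrame symbol and manual type conversion through JNIEnv):

    object NativeBridge {
        init {
            // Loads libnativebridge.so, built separately with the NDK.
            System.loadLibrary("nativebridge")
        }

        // Every native entry point gets declared like this, one by one.
        @JvmStatic
        external fun processFrame(data: ByteArray, width: Int, height: Int): Int
    }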


>Countless screen sizes to worry about

I think you are taking it the wrong way, and you are not at fault; there is a lot of FUD about screen sizes.

Designing for flexible screen sizes and densities is pretty easy and I don't think I would gain any significant amount of time if Android was limited to 10 screen sizes.

You just think in terms of density-independent pixels (dp) and inflection points where you adapt your design (one more row, multiple panes, etc.).
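A rough sketch of the inflection-point idea in Kotlin (the thresholds are illustrative; in practice you would usually express this with resource qualifiers such as values-w600dp rather than in code):

    import android.content.res.Resources

    // Decide how many panes to show from the available width in dp.
    fun paneCountFor(resources: Resources): Int {
        val metrics = resources.displayMetrics
        val widthDp = metrics.widthPixels / metrics.density
        return when {
            widthDp >= 840 -> 3  // large tablet
            widthDp >= 600 -> 2  // small tablet / phone in landscape
            else -> 1            // typical phone
        }
    }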


Which really you're supposed to be doing on both OSes at this point. As an iPhone user, there are still apps that don't support the iPhone 6 screen size, and it's obnoxious.

iOS has four different retina screen resolutions on the phones, more on the tablets. For all we know there will be more in two weeks. Creating pixel perfect layout doesn't work very well anymore.

It may be easier to exhaustively test on iOS because there aren't quite as many variations, but devs should definitely be using flexible layouts.


Completely agree, I only mentioned Android because of the 'screen size hell' myth.

Another argument is that you want to future proof your app.

Flexible layouts all the way.


And then of course there's accessibility. If your app is already designed to handle different screen sizes, then it's easier to resize various elements because you want bigger or smaller text.


Android:

- Developer fee of $25 if you plan to actually publish to the store

- Needs a quad-core Xeon with at least 16 GB and an SSD to have a usable experience with Android Studio, or configure it to run in laptop mode

- NDK requires lots of JNI boilerplate to call about 80% of Android APIs


> - Needs a quad-core Xeon with at least 16 GB and an SSD to have a usable experience with Android Studio, or configure it to run in laptop mode

I'd disagree with the Xeon bit; I have a 6-year-old Sandy Bridge quad-core, and Android Studio runs butter-smooth.

I'll confess to the 16GB of RAM and an SSD though. Although honestly, an SSD nowadays is required for anything to be usable.

Android Studio is amazingly performant though, and the emulator is great - ignoring bugs, glitches, and the occasional times it just stops working until I flip enough settings back and forth that it starts working again.

Of course a huge benefit is that I don't need Apple hardware to develop for Android.


The main issue I have seen is that people don't know how to configure Android Studio & Gradle memory consumption.

Granted, they should not have to do that in the first place but once done correctly, it makes AS fly even on very large projects.


I'd also rather develop for Android, but Android Studio's resource requirements made me appreciate Eclipse again.

Apparently AS 3.0 will be better in that regard.


> I'd also rather develop for Android, but Android Studio's resource requirements made me appreciate Eclipse again.

There is a reason my dev machine is a desktop: better keyboard, better monitor, better performance. A 6-year-old machine that cost about $1500 performs better than the ultraportables a lot of people try to press into service for writing code. Even with a faster CPU, thermal throttling is a concern once the form factor gets to a certain size.


We don't get to choose what we get.

Usually the customer's IT assigns hardware to external consultants.


Ah, interesting. When my team used external consultants, we did the inverse: we gave the consulting company a beefy requirements list and told them anyone sent to work for us must be at least that well equipped.

Paying by the hour, we were heavily motivated to minimize compile times. :)


> Needs a quad-core Xeon with at least 16 GB and an SSD to have a usable experience with Android Studio, or configure it to run in laptop mode

I've had no problems using Android Studio on my Mac with 8GB. On a side note, the Android emulator even started faster than the iOS simulator. I also found it odd that the Android emulator seemed to consume fewer resources than the iOS simulator, which was taking up about 2GB of RAM.


Thanks, it's a one-time payment so I completely forgot about Android's developer account fee. And I'll add the other points too.

Edit: Also, how does Xcode compare in performance? It seems lighter, but I only have a pretty recent Macbook Pro to test on (which also handles Android Studio just fine)


> Also, how does Xcode compare in performance? It seems lighter, but I only have a pretty recent Macbook Pro to test on (which also handles Android Studio just fine)

Depends on what you open with it - for ObjC it's usually faster and smoother, for Swift it tends to be slower (about on par with AS Kotlin plugin) and for C++/ObjC++ it's horribly slow just like any other IDE out there :/


Much lighter, the iMacs at the office still handle it.


> Though, with all of this AR stuff, I'd just go the Unity/Unreal route, as it will probably be very game-y and such.

You are confusing VR and AR. AR has a ton of legitimate use cases outside gaming:

- https://storify.com/lukew/what-would-augment-reality

- http://www.madewitharkit.com/ideas and their twitter https://twitter.com/madewitharkit


I read that as AR development having more in common with game development, which makes sense given the emphasis on performance, 3D rendering, and latency.


Edit: Ninja'd by Crespyl :P

Yeah, I should have said it could benefit from using a fully-fledged game engine, considering you'll likely need to import 3D models, have objects interact with each other, and so on, which Unity would help a lot with.

However, I primarily do experimental work in those engines, so I'm pretty biased.


VR also has a ton of non-game use cases. This is one of the biggest issues with VR marketing. But yes, as other commenters have mentioned, AR development still benefits from using a game engine (maybe rebranding it as a "3D application framework" would make it more palatable to "serious app developers") because you probably want to work with 3D models/rendering.


On the other hand - I find it's the other way around.

Xcode is the IDE most different from other IDEs. I like Swift but just don't like Objective-C. The build tools and ecosystem are too tightly coupled (I like to switch between development machines without always having to be on a Mac).

Java is definitely painful, but I suppose the bias I have here is that I have developed on it for several years.

The breath of fresh air so far has been React Native, and I wish more things got ported over to JS (or things like the Expo kit).


"i wish more things get ported over to JS" Please, please don't say this out loud. It might come true.


:-) - Strangely, I am looking forward to this. I think it just comes down to the fact that I am very comfortable with the JS and Node ecosystem and prefer it over all other mobile platforms atm. I also think maintaining one (almost ~80% shared) codebase for both platforms is a significant advantage.


Xcode is best for things like managing certs, not actually working in. The IntelliJ version is pretty good; it integrates well with the build system and debugger.

I find it curious you bring up Expo, as I've found it to be the most opaque and user-unfriendly IDE I've used in some time; I don't get why they don't just leverage VSCode and quality tooling over Yet Another Goddamn IDE.


Curiously, I feel the opposite way–Xcode's support for managing certificates is pretty awful (it's gotten better recently, but it still occasionally gives cryptic errors for no discernible reason). For actual programming, though, it works pretty well as long as it doesn't crash.


To clarify - Expo as a framework, and not XDE. I think they have made Expo eject a bit cumbersome, but it works with some wrangling. I like the ExpoKit framework in general but don't want to be tied to Expo's release chain.


If you mean XDE, it's not meant to replace VSCode or Sublime or anything. XDE just gives you buttons for common actions that you would normally do via the CLI.


Android Studio/IntelliJ is an excellent IDE. Kotlin is a great language. There's Instant Run for quick builds.

I've only been doing Android for 9 years. I've seen the development tools evolve and improve over the past few years.

But what do I know. You're probably the expert here.


Android Studio/IntelliJ may be excellent to you, and I'd agree the IDE is better than Xcode as far as slinging code into an editor is concerned. However, the Android development system isn't just the IDE: it's the build process, integration with debugging tools, and the APIs. Java doesn't have to be a bloated mess of over-engineered cruft. I've seen wonderful, clean Java. Android development is the opposite of that.

Kotlin may be great. I haven't used it much, but so far I'm finding it can't hide Android's bloated, over-engineered substructure.


The build process is significantly better in the Android world once you actually take a look at how it works and need to start doing automated testing and deployment. Xcode's tools for actual continuous development beyond manual clicking are horrendous and waste a huge amount of person-hours.


There's a third-party solution for iOS build automation - https://fastlane.tools. I haven't tried the whole end-to-end solution, but the pieces I tried worked very well.


Unfortunately, the support for managing code-signing in Fastlane isn't quite fully-baked, and it can get messy. It's still really useful, though, if you have to deal with iOS builds.


We used that, but sadly it doesn't help with CI, constant simulator / connected-device issues, and breakages in toolchains when Apple releases a new Xcode :(


> There's Instant Run for quick builds.

Instant Run was buggy and, ironically, made builds a lot slower the last time I tried it. That was after they said they had fixed a bunch of issues and that we should give it another chance.

I've been doing Android since 1.0 and started with iOS/Swift half a year ago. I think the iOS platform is nicer, simpler, more thought out and overall a better experience, but not by a huge margin. Haven't used Kotlin yet though.

One area where iOS sucks is creating and (sometimes) working with UI.


> But what do I know. You're probably the expert here.

Was it really necessary?


I think it's warranted - the OP wrote an insultingly dismissive post with claims which are downright wrong.

It would be fine if he only criticised the newbie experience (where he's very right), but he also decided to add a bunch of wrong things in there.


I disagree; I think it made the response entirely condescending and removed any point they had.

For what it's worth, I have about as much experience as they do, and I disagree with them. While the tools have improved, they still have a long way to go.


Maybe that says more about how smart you are than how good the Android ecosystem is.


And? Without fail, whenever there is a post about Android on HN, there'll be someone who miserably recounts their tourist experience with Android. But that has zero professional relevance to anyone doing anything with any intention or focus at all. Actually, it's worse than zero relevance; it's simply misleading noise.

Somehow many of us manage quite fine, and enjoy the experience.

In every other realm that sort of drive-by shooting gets rightly criticized. "Tried Rust -- => everywhere. Lame". "Tried vi. Couldn't quit. Garbage."


Sure, if the only tangible thing that the parent poster said wasn't that he's been on the platform for 9 years. The brain's an amazing thing - it can get used to almost anything, given enough time!


Congrats on making it 9 years. I've only been doing mobile development for 2 years, and using React Native for most of it. So I have an opinion that may be less biased toward a certain toolset.

I'd compare developing on Android very unfavorably to iOS. All the points the OP made are accurate, in my experience. Every time I need to dive into Android native code, layout inflation, etc., I find it to be a crufty and unpleasant system. And Gradle is really a pain to use - tons of edge cases and unhelpful error messages. Add to that, you need to support like ten thousand devices, many of which are running Android 4.4 (which, IIRC, is like three years old) and have a WIDE range of screen sizes.

Compare that to iOS development, and the differences seem obvious and apparent to me.


> Add to that, you need to support like ten thousand devices, many of which are running Android 4.4 (which, IIRC, is like three years old) and have a WIDE range of screen sizes.

I've been developing Android apps for years (before the support lib even existed, much less all the fancy new kids) and the range of screen sizes has literally never once been an actual issue. Use 'dp' instead of 'px' and you pretty much never have a screen-diversity problem. All the platform views & layouts handle the majority of the work.
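For illustration, the whole trick is that one dp value scales with the device's density, so a 16dp margin is the same physical size everywhere (hypothetical helper):

    import android.content.Context

    // Convert a dp value to physical pixels for the current screen density.
    fun Int.dpToPx(context: Context): Int =
        (this * context.resources.displayMetrics.density).toInt()

    // e.g. val marginPx = 16.dpToPx(context)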

Does React Native make this more complicated for some reason?


Device support is a nightmare from a game development perspective if you're coding in C++ and using the GPU directly. Figuring out why the black box shader compiler is crashing on a particular device is not fun.


Perhaps screen sizes were a bad example of fragmentation (though, because the range is so much wider than on iOS, keyboard-avoidance stuff can be unpleasant). Better examples would be subtle device-specific bugs. Like, off the top of my head, touch events being handled differently for views on the Galaxy S5, making side swiping impossible. I can't confirm whether that's RN-specific or not, but I do know that I spent tens of hours on Android-specific bugs that just were not present on iOS.

And perhaps RN was the culprit for some of these things. But definitely not all.


Same experience here. The API is the same, so what exactly is the problem? There's an occasional device-specific bug, but I usually don't spend much time on those.


Why is it that anytime something comes out from Google (or just about anything else, for that matter), people have to find something to complain about? There are more engineers at Google than the ones that were working on this project. Just because they are developing this project doesn't mean they stopped development on all other Android-related projects.

So why don't we actually discuss the product instead of finding a way to shoehorn in an unrelated topic to complain about?


The problem isn't that someone mentions something else; the problem (from the standpoint of discussion of this story) is that comments like this resonate enough with people to get upvotes. That's actually Google's problem.

Even if you think it resonates because people are just drinking the Apple kool-aid, that's still Google's problem.

Let Google handle its own problems, and use whatever vote power you have to help steer the conversation here. Complaining about it doesn't exactly improve the signal-to-noise ratio.


Android and iOS compete in the same ecosystem. It's extremely relevant, in my opinion, to compare and contrast their offerings. The release of a new product, library, or feature is an excellent time especially because of the freshness and potential additional relevance.

And I was talking about the product. ARCore is a single example of the larger, broader problems I "complained" about.


This is the view from the inside as a Google employee. You see all these lines between what you work on and what other people work on. From the outside? People see it quite differently. For an example you can maybe relate to, it's easy to criticize Apple for recent design decisions (the Touch Bar, lack of an Escape key, etc.) and say that the whole company is losing touch with reality. The people working on the iPhone might feel the same way you do here: there are more engineers at Apple than the ones working on the Touch Bar.


> I just wish they'd work on all the other things that make Android development a horrible experience.

Even worse, you're describing the improved Android development experience.


I don't even care for Apple in general. If I were ever to develop these for commercial purposes, however, I'd go Apple all the way and consider Android as a secondary market. This reflects the quality of the development environments and the customer base (in terms of revenue). The only reason I write anything for Android is out of necessity (these are the devices we own).


Funny, because then you'd meet the amateur hour that is Xcode automation and continuous integration. As soon as you graduate from a toy app to something you need to maintain for a while, you find out just what a horrible state the Apple development tools are in when it comes to UI testing, building in general, and constant breakages of code when a new iOS is released.

We literally spent a month of man-hours every year to fix up the constantly breaking Xcode CI setup and iOS API breakages while the Android team continued development unimpeded. Not to mention the constant breaking changes and crashes of the Swift language and IDE toolchain.

I do agree that for a complete beginner the experience is significantly better in Apple world, but please do not comment on "commercial purposes" development if you haven't done either.


> and constant breakages of code when a new iOS is released […] We literally spent a month of man-hours every year to fix up […] iOS API breakages

Unless you're referring to the Swift 2 -> 3 migration process, I have to seriously question what you're doing that causes so much breakage with iOS version updates. With Obj-C there's usually just a couple of deprecation warnings to handle. With Swift (outside of the 2 -> 3 migration process) there may be a few more updates, due to the Swift SDK wrappers, which may be hard errors instead of merely warnings, but it's still usually pretty easy to fix. And if you are talking about the Swift 2 -> 3 migration, good news: you don't have to do that again!

Which makes me wonder, when you say "iOS API breakages", do you really mean you're using SPI, method swizzling, or subview diving and have to deal with the fact that you're doing something against the rules?


>It's nice to see them finally providing something similar to ARKit

ARKit isn't even out yet though...


You can download the Xcode and iOS betas and try it out right now, on current hardware. And you can trust that all your potential users will have it by the end of the year.


Furthermore, with the Apple event on the 12th as expected, it will be in production in under a month.


I'd mostly agree, but the one massive pain point I have with Apple (iOS) development is automatability. You seem to need manual steps in the Xcode GUI for everything, or at least if you don't, it's not well documented or generally known by the community how to set things up without manual UI steps.

Android's mess of Java cruft may be overkill, but it's at least well-documented text-file-configurable and can 100% be maintained without ever installing Studio.


Which steps? I don't do much mobile development, but I have done some, and I have never used the Xcode GUI, ever.


Out of curiosity, what apps did you build for your wife or kids? To solve what problems?


I commented here on Tango 3.5 yrs ago: "remains to be seen if Google can persuade cellphone manufacturers to include 2 special cameras + 2 extra processors in their future devices." Looks like the answer was no.

It appears the ARCore API is well designed and 1-to-1 feature equivalent to ARKit, i.e. VIO + plane estimation + ambient light estimation. The APIs even share a lot of names, e.g. Anchor, HitTest, PointCloud, LightEstimate.
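To make the overlap concrete, the core loop on either platform looks roughly like the hypothetical sketch below: update tracking each camera frame, hit-test a tap against detected geometry, and pin content to an anchor. None of these interfaces are copied from either SDK; they're stand-ins for types of the same flavour, so treat the names and signatures as illustrative only.

  import java.util.List;

  // Hypothetical stand-ins for the ARCore/ARKit concepts of the same names.
  interface Pose {}
  interface Anchor { Pose getPose(); }
  interface HitResult { Pose getHitPose(); }
  interface Frame { List<HitResult> hitTest(float screenX, float screenY); }
  interface Session {
      Frame update();               // advance tracking by one camera frame
      Anchor addAnchor(Pose pose);  // pin virtual content to a real-world pose
  }

  final class TapToPlace {
      // Typical per-tap flow on either platform: hit-test, then anchor.
      static Anchor placeAt(Session session, float x, float y) {
          Frame frame = session.update();
          List<HitResult> hits = frame.hitTest(x, y);
          if (hits.isEmpty()) return null;  // no tracked plane under the tap yet
          return session.addAnchor(hits.get(0).getHitPose());
      }
  }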

Now that stable positional tracking is an OS-level feature on mobile, whole sets of AR techniques are unlocked. At Abound Labs (http://aboundlabs.com), we've been solving dense 3D reconstruction. Other open problems that can be tackled now include: large-scale SLAM, collaborative mapping, semantic scene understanding, dynamic reconstruction.

With Qualcomm's new active depth sensing module, and Apple's PrimeSense waiting in the wings (7 yrs old, and still the best depth camera), the mobile AR field should become very exciting, very fast.


It seems very odd that this comes out and seemingly replaces Tango. Google spent a lot of time going over new stuff with Tango in the latest Google I/O keynote and Google Lens, which was featured quite a bit, seems like it relies on Tango and its depth sensing hardware for their "VPS" stuff.

Also, when Clay Bavor was talking about Tango supported devices he remarked that the devices were getting smaller and smaller then implied it was coming to smaller, more traditional devices. I took this to mean they were close to getting the sensors ready for wide deployment but I suppose this could have just meant they were ditching the sensors because they felt the software was good enough.

I'm kind of disappointed. I'd hoped that he was saying that Tango sensors would show up on Pixel 2 (which was a long shot, from the leaked photos not really showing the many sensors you see on current Tango devices). Instead we have what feels like a rushed out me-too to match ARKit.


This does not replace Tango, at least not technically. Tango has a depth camera, which is not replaced here. The depth camera is useful for a variety of applications.

I hope Tango will continue to be developed. It is more robust for position tracking, can do 3D scanning, etc.


> and can do 3d scanning

Very badly. I'm very disappointed by Tango in this regard.



Makes sense, I always assumed the Tango was sort of a codename.

But if they're changing everything over to ARCore then as a consumer how am I supposed to differentiate the phone that supports the software approach from a phone that supports the extra hardware sensors without digging into the specs? There doesn't seem to be a specific label for that.


It was just yet another set of tech presented at I/O that ended up not turning into reality.

There were quite a few of them already.


An interesting thing about this is that it could actually benefit ARKit by making it less risky. If you develop an ARKit app now, you can be pretty sure you'll be able to port it to Android in a year or two as ARCore rolls out.


Why would it be risky anyway? Are app developers really counting on device parity between iOS and Android? Are developers not going to release something great because there is a chance Android wouldn't support it? It feels to me that there is more risk in developing to the lowest common denominator — not creating something useful because of the potential it won't work on Android.


ARKit could be a flash in the pan, I'm thinking. But a) Google is taking it seriously, and b) Android will have something very similar soon.

Edit to add: I think a good comparison would be building an app around iOS force touch, or Macs with the touch bar. Those seem riskier.


It's risky in that you'd be developing a product that can't be "sold" to both markets. If you have two seemingly viable ideas for an app, and one can reach both platforms, while the other only one, you'll likely choose the former.


I'd guess that getting OEMs to install Tango sensors in their phones has proven to be more difficult than they imagined, and that having an AR library that only works on specialized phones would not be a good idea.

Hence why they're probably switching to the software-only approach where device support can be added more easily.


Also, if software-only AR proves out the use cases and builds consumer demand, OEMs may be more easily persuaded to adopt new hardware for better AR experience.


Looks like a lot of people on this thread think Google's goal is to beat Apple with the features, but in my opinion that's not the case.

Google really has nothing to lose by following iOS lead, it's good that they "gave up" on Tango and decided to follow ARKit because that means Google is not trying to beat iOS with Android, but trying to commoditize iOS.

You really can't beat Apple at its own game, it's best to let go of that foolish goal and focus on trying to nullify whatever leverage Apple has with their few years lead.

Sure ARCore won't be installed on a lot of devices now, but in a couple of years they probably will (This is not the same as the Android ecosystem currently being fragmented because AR provides an entirely new type of UX and will be significant enough for people to get a new phone), and as long as Android gets there Google will have achieved its goal--commoditize AR.

In the end, Apple will have made tons of money with their iDevices, Google will NOT have, but they will have gained enough AR user-base that they can use it as their leverage, everybody wins.


It's funny how much marketing speak these big companies feel obliged to cram in. "At Android scale" -> "to catch up with Apple's ARKit"

It's actually impressive that Google is able to change direction and get this software-only AR out the door so quickly to compete with Apple, but they still don't want to admit that's what they're doing.


Maybe having been developing an advanced AR platform since 2014 has something to do with the speed at which they can carve out and subset a "light" AR experience :)


Also it looks like Google is retiring the "Tango" brand [1].

https://techcrunch.com/2017/08/29/google-retires-the-tango-b...


Tango has technically been a failure in terms of the specific AR hardware. No adoption and no real software support.

Apple's purchase of MetaIO and its focus on just SLAM is really the right way to go. Maybe improve it a bit via specialized hardware when available (progressive enhancement in a way), but at least start with SLAM.

Google didn't have to be behind on AR at this point in time if they had ditched the focus on Tango hardware and instead focused on SLAM.

But that is water under the bridge, Google is now on the right track after being forced to do so by Apple.


I think ARKit-style SLAM will turn out to be a fad; there'll be a lot of interesting toy apps, but I don't think AR without environmental understanding or persistence offers much beyond that. The basic ARKit demos we've seen are the same stuff we've been seeing for a while now, demoed with third-party libraries.

Including depth-sensing HW was the right solution, but Google doesn't have its own popular smartphone as a forcing function. I predict eventually Apple will include a depth camera, or they'll use dual-cameras to try and synthesize it, and once that happens, all Android manufacturers will follow suit.

If AR is to be useful, it's got to be a lot better at tracking and drift, at making sense of the world, of supporting occlusion and mapping.


That's interesting. Regarding HW vs SW, I think the exact opposite.

I always thought Kinect, Tango, and other HW-dependent solutions were a fad, and SW-only on ubiquitous cheap sensors is the right solution. SW keeps getting cheaper, faster than HW does.

I predict that, even if Apple (re)starts the custom-HW fad, within 10 years we'll see devices with good-enough camera-plus-SW AR outnumber specialized-HW AR devices by at least 10:1, and it'll only get more extreme over time.

(Mind you, that prediction requires that we continue to get performance-per-watt-per-dollar improving; i.e. it assumes that Moore's law finds a way to outlive current processor fabrication technologies.)

> If AR is to be useful, it's got to be a lot better at tracking and drift, at making sense of the world, of supporting occlusion and mapping.

That's for sure.


ARKit seems like the near-term future. It's useful and no longer requires as much expertise since it is available in the OS and is well documented. For a software library it's surprisingly accurate.

The Tango model of using extra hardware is probably much better, but seems further off. The software model works today on existing devices and lets people see how this is useful. Once you have that, convincing people it's worth the money to have the hardware added to the phone becomes easier.

Given how many Android phones are lower cost than the flagships, was Tango ever going to be very popular? Apple could have forced the issue (like Lightning or the headphone jack) but people could always switch Android OEMs to get something cheaper if they don't see the value.


Tango's extra hardware was for depth sensing, and low-energy feature tracking, but the basic technique of plane detection from what I understand, is the same technique ARKit uses, which came from Flyby according to one article I read. In 2014 when Tango was released, even Apple HW wasn't powerful enough to run the tracking in software alone.

I have a feeling the end game of this is going to be that Tango-like devices are used for mapping the world, and ARKit like libraries consume the geometry.

That is, not everyone has to own a device with depth sensing. For example, if Streetview-like services using LIDAR, or self-driving cars with LIDAR, map point clouds of most outdoor areas, and merchants and vendors map interior areas with specialized Tango-like HW, then most of the benefits of depth-sensing AR can be had for people without depth sensors.

It would be a mostly static 3D map of the world, not frequently updated, but probably good enough to enable a large number of apps.


> but people could always switch Android OEMs to get something cheaper if they don't see the value.

The kind of people that buy flagship Android phones would probably either see the value or be price insensitive enough not to switch over it.

OTOH, “works ok now” is often more important than “works better later”, so getting something out that will work with today's flagships has value even if Tango would be practical down the road.


ARKit is a perfect example of "worse is better". Quite obviously inferior to the full Tango demos with occlusion and room mapping, and HoloLens, but simple enough to excite the imagination, and to enable "fake AR" experiences like Pokemon Go.


That's what I was trying to get at. Once people get a taste, I think the demand for more capable solutions like Tango will be much higher than it would have been otherwise.

When Pokémon Go came out I was very disappointed to find its much-hyped 'AR' was really just rendering on top of a live camera image with no tracking at all.

The demo of the ARKit version from WWDC is what I had been expecting.


"but people could always switch Android OEMs to get something cheaper if they don't see the value."

Not really. If you're someone who values updates, you're kinda stuck paying for a certain tier of device.


I think the android market has proved that there are a lot of users who don't pay attention to that. Otherwise we wouldn't have so many comments and stories about the lack of updates or how long it takes vendor X.

I'm just thinking about your average person who goes into a Best Buy or cell phone store and wants a new Android phone. Given two similar phones, one with the extra hardware and the other cheaper, if they don't see the point they'll probably go with the cheaper one.

If they got to use software only AR stuff on their previous phone that may change that decision.


> or they'll use dual-cameras to try and synthesize it

I could be wrong, but I was under the impression they already leverage the dual cameras on the Plus phones.


That's how Apple's "portrait" mode works, and I remember hearing around WWDC that on the phones that have it, it's also useful for ARKit.


Doesn't the iPhone 8 have a depth camera?


The package name of the arcore-preview.apk is 'com.google.tango', so that would seem to be the case.


This is something I have been personally pushing the Google AR team on for at least a year and well before ARKit came out. I'm glad to see that ARKit made them actually move on this.

Google had been dead set on pushing Tango hardware to OEMs in the hopes that they would be able to lower the BOM on the hardware. Everyone who has been in AR long enough knew that wasn't going to happen and that monocular SLAM in software was the way forward on mobile.

The key thing now for AR devs is that they will have fairly comparable monoSLAM capabilities available on both Android and iOS for their apps.

HOWEVER that just means that the tracking portion of the equation is solved for developers. A few years ago it was possible to make a cross platform monoSLAM app if you used a handful of tools like Kudan or Metaio. Obviously ARKit and ARCore are going to be more robust with better longevity, however the failure of uptake of AR apps was not because of poor tracking, it was because there is an inherent lack of stickiness with AR use cases on mobile. That is, they are good for short infrequent interactions, but rarely will you need to use the SLAM capabilities of an AR app everyday or even multiple times a week.

This is why I am so invested in WebAR, because you can deploy an AR capability outside of a native app and the infrequent use means it can have longevity and a wider variety of users.

Yes, for those apps that people use all the time it will be very valuable, but if you look at the daily driver apps like FB, IG, Snap etc... they are already building the AR ecosystems into their own SLAM. All this does is lower overhead for them. For the average developer it doesn't solve the biggest problems in AR.

Kudos to Google, but developers need to really understand the AR use cases, implementations and UX if they want to use these to good effect.


Wow, this Dance Tonite [1] [2] thing looks pretty interesting.

[1] https://tonite.dance/

[2] https://www.blog.google/products/google-vr/dance-tonite-ever...


Even with ARCore and the new ML system in Oreo, Google can't match iOS, because the install base of Oreo is nothing now and won't be over 20% for another 2 years. Apple's ARKit is going to bring a whole new swath of exclusive apps to iOS. These APIs currently can't be recreated on Android, which means most apps won't be able to be ported with all features, if at all. It's becoming harder and harder for devs to be cross-platform and Google is falling behind Apple.


I see it as precisely the opposite. It seems like the tendency to engage in platform wars obscures the larger issue that this is all going to settle down and converge over time and nothing Google or Apple is doing right now will be the final form of AR.

Remember early 3D in the 90s? We had the S3 ViRGE/VX, 3dfx Voodoo, PowerVR, Rendition Verite, Matrox, TNT, etc. They had a huge disparity in capabilities, fillrates, and APIs; most didn't support OpenGL, and even 3dfx -- the card closest to what games settled on as a minimum set of functionality -- only supported Carmack's miniGL. Early DirectDraw and Direct3D were horrendous, and to get performance, games had to be ported to each card's proprietary APIs. Effectively, Quake and Unreal became the Unity of their day, offering a higher-level abstraction for building cross-platform titles until the cards all converged on OpenGL.

And converge they did. Eventually most cards offered similar fillrate, multitexturing, and fixed pipeline options, the market settled on a common hardware featureset, and then competed on price and performance.

Later, programmable shaders disrupted the market again, and we went through iterations of pixel/vertex shaders from 1.0/1.1/1.2/1.3/1.4 to 2.0 to 3.0 and then GLSL and finally something like CUDA.

I think we're going to see the same thing happen in mobile and whatever fanboys propose as some kind of insurmountable advantage will turn out to get commodified if it becomes successful. For example, if AR takes off, or if Apple adds a depth sensor and Tango-like functionality takes off and a huge startup market and VC funding coalesces around it, then roughly 1-2 years later, every Asian OEM will have Android devices with depth cameras and similar functionality.

The only reason for the discrepancy today is the hardware fragmentation. But the market follows the money and abhors a vacuum. Hardware convergence in capabilities always follows, and eventually developers end up with middleware to address it.

This does lead to "iOS first" for startups, but if you look at the App Store and Play Store today, practically every major game and app you want is available on both platforms. It'll take years for this to shake out, but if AR becomes huge, smartphones in 5 years will all have roughly a similar set of features.

P.S. My own opinion is that a phone's viewport is too small for a great AR experience. It's a nice initial experience and visually impressive, but will quickly become tiring. The long-term form of this has to be some form of glasses, because waving around a phone in all directions and holding it in midair while touching the UI is kind of awkward.


> My own opinion is that a phone's viewport is too small for a great AR experience. It's a nice initial experience and visually impressive, but will quickly become tiring. The long-term form of this has to be some form of glasses, because waving around a phone in all directions and holding it in midair while touching the UI is kind of awkward.

I'm 100% certain that's what Apple is preparing for. AR in a phone is a neat toy, a gimmick. The most perfect AR toolkit ever made still won't change the fact that you're holding a phone in your hand, interacting with it through a screen, etc.


I think you're dead on.

A number of people have speculated that this is an attempt to get some real world trial and have apps that are already ready so if they announce some sort of HoloLens thing in the future the software/devs are already 80% there.


I don't know about the game market in detail, but when I see people building indie mobile games, 80% of the time they're building for iOS and saying they'll look at an Android port if the game does well.

Obviously if they built using Unity it should be a simpler port, but clearly not all of them do.


There's plenty of cross platform tools for games now. If you build with Unity or Unreal, you hit multiple platforms at once, including Steam and consoles.

If you're doing a 2D game, there are lots of middleware options as well. Yes, people build for iOS first because you only have to test for a few devices -- it's like developing for fixed console HW -- and because iOS users tend to spend more. But most indie games that make money saturate on each platform fairly quickly, and so devs go multiplatform relatively quickly once they max out revenue on a given platform.

The games business is pretty tough if you're not a triple-A breakout hit.


So if you want to play the largest variety of indie games, rather than just the hits you're better off buying an iPhone.


I guess it depends on whether you care about early adoption or not, or what country you're in. Most of the indie games are available on Android. And if you happen to be in Asia, many of the indie games target Android first.

I just don't think most people care about these issues, both platforms have a huge software library, and both have a huge number of users. I'm kind of tired of people turning technical threads into platform war discussions.

If you're a loyal iOS developer or user, why do you care about what Android does? This announcement is only positive. For Android users, it signals platform level AR support. And for AR fans, it signifies a growing convergence on a low hanging fruit "AR-lite" that will be available to both platforms.

It would be like if only one platform had a Web browser and suddenly the other platform got a browser, and people were all angry like "Well, platform #1 had a browser first! And it has more websites that optimize for it!"

The real story is "hey, now there are two web browsers, and the web will be larger"

The endgame is AR-lite now has two platforms competing. That's good for users, and good for developers, despite an intermediate period of chaos and fragmentation as things evolve.


No, that's not what he said. You just reiterated your (misguided) post.


> 3D > games > 3d > games > shaders > games > games > games > visually impressive

You are either confusing VR and AR or you have about zero imagination. AR has a ton of legitimate use cases outside gaming:

- https://storify.com/lukew/what-would-augment-reality

- http://www.madewitharkit.com/ideas and their twitter https://twitter.com/madewitharkit


You didn't read what I wrote; I'm talking about the natural progression of how HW and SW platforms evolve. I use 3D accelerator hardware as an example of initial fragmentation and divergence followed by convergence.

Where did I write that there's no legitimate use cases for AR? I find games the LEAST PLAUSIBLE use case because holding your phone and moving around a virtual plane is a nice demo, but sucks for extended game sessions.

Indoor navigation and stuff like Google Lens are the most plausible uses. And many of the examples you show in your ideas link, like showing seat positions, aren't possible in ARKit because it doesn't have persistence.

A lot of the examples in the ideas thread require area understanding like Google's Visual Positioning System, or Tango. If you want a consumer to be able to pop open the camera and instantly have it tell him where his seat is on an airplane, you need to have already stored persistent features of the interior of the plane. (e.g. Tango ADF https://developers.google.com/tango/overview/area-learning)

Look at the Tango app in Lowes (https://www.youtube.com/watch?v=KAQ0y19uEYo) This is the kind of AR that is useful to the majority of people, it's the kind of AR that is a killer app, and it's the kind of AR that won't be available without area understanding capable HW deployed for creating these maps so they can be consumed by cheaper SW-only AR stacks.


> Where did I write that there's no legitimate use cases for AR? I find games the LEAST PLAUSIBLE use case because holding your phone and moving around a virtual plane is a nice demo, sucks for extended game sessions.

AR with an HMD (like HoloLens) changes that a bit. I don't see, technically, why phone-based AR couldn't use something similar to current phone-holder VR HMDs, though a camera accessory positioned more optimally for AR use than a phone camera sitting in a horizontal holder might be needed for that.


>new ML system in Oreo

Do you mean Tensorflow Lite? That's not part of Oreo, so shouldn't be something people have to wait 2 years for.


Apple and now Google are making it easier to produce AR apps, but the tech has been there for years. I made my first AR demo some 8 years ago for a big event I was working with (on a laptop).

IMO AR in smartphones and tablets is a fad that in 2 years nobody will care about. Remember all those gyroscope/accelerometer based games? Yeah me neither.

Maybe AR will be awesome when someone (Apple? Microsoft?) releases a pair of lightweight glasses that can produce stereoscopic images superimposed seamlessly over reality, but we are still very, very far away from that.


But AR glasses/implants will be the evolution of smartphones. There won't be a quantum leap from phone SoC to glasses SoC. Google Glass was an early demonstration of that.

So if one needs to evolve AR hardware from phones to glasses, then putting it on the phones is a prudent next step, isn't it?


I agree, and am more optimistic about AR being more than just a fad, but a new type of interface that evolves over different hardware mediums.

AR isn't just a new interface with the user, but also a new interface with the visual environment around the user. This has many more degrees of usefulness than an accelerometer.


> So if one needs to evolve AR hardware from phones to glasses, then putting it on the phones is a prudent next step, isn't it?

Time will tell, but if we are let's say 10 years away from good AR glasses what difference does it make if smartphones of today can display AR content?

Obviously Apple (and now Google) are fighting in the marketing space, not technical one.

In truth the problem is really hardware not software.


I don't think it will take that long to have lightweight additive projection glasses with some sort of camera/depth sensor and eye tracking.

The processing power or battery for that form factor will not be there for another 10 years - but offloading processing and power supply to the phone in your pocket via well designed tethering is conceivable.

I think we'll have this in a couple of years, and you'll control it via voice recognition and hand gestures.


1. https://medium.com/super-ventures-blog/why-is-arkit-better-t...

2. If you think that people will opt for glasses when they don't need them, and they have smartphones with them all the time, you're gravely mistaken (there are also physics which make "lightweight glasses with images superimposed in real time blah blah blah" an impossibility)


>there are also physics which make "lightweight glasses with images superimposed in real time blah blah blah" an impossibility

Could you elaborate here? I'd like to understand the physical limitations.


Au contraire- that tape measure app is the killer application of AR


Sweet! This is amazing and I was hoping this was going to happen sooner rather than later.

How long until they update the ChromiumAR project with support for ARCore, and when will that be previewed and then generally available? I know that tons of people are waiting on that:

https://github.com/googlevr/chromium-webar



Thank you! I love Google for making the web a top priority on par with Android, whereas with Apple it is an unloved step-child.


Chromium-WebAR is a similar project, but backed by Tango, and currently exposing a different experimental Web API than WebARonARCore and WebARonARKit. Soon we hope to update Chromium-WebAR (with a rename to WebARonTango) to have API parity with the ARCore/ARKit-backed browsers

https://github.com/google-ar/WebARonARCore


If someone can make a react native binding for both ARCore and ARKit, that'd be super amazing and make the bar to entry for AR apps much lower.


React Native is fairly toxic after the Facebook patent grant fiasco. Imagine building the next big social engine using React Native... and Facebook stealing all your IP.


I will attempt to make a shitty, hacked-together version :)


That's the spirit! hahaha


Now, someone just needs to build an AR library that abstracts the functionality of these two through a common API.
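Something like this, say (a pure sketch: every name below is hypothetical, and the real work would live in per-platform adapters wrapping ARCore and ARKit):

  import java.util.List;

  // Hypothetical common surface; ARCore- and ARKit-backed adapters would implement it.
  interface ArTracker {
      void start();
      float[] cameraPose();          // latest camera pose, e.g. a 4x4 matrix
      List<float[]> planes();        // centers of the horizontal planes found so far
  }

  // App code is written once against the interface; which adapter gets injected is a
  // per-platform decision made at build time, not in the app logic.
  final class SceneController {
      private final ArTracker tracker;

      SceneController(ArTracker tracker) {
          this.tracker = tracker;
          tracker.start();
      }

      boolean readyToPlaceContent() {
          return !tracker.planes().isEmpty();  // wait until at least one plane is tracked
      }
  }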


There will be a few different abstractions.

Google has already abstracted it to WebAR on both iOS (via ARKit) and Android (via ARCore) here:

https://developers.google.com/ar/develop/web/getting-started

And of course Unity and Unreal Engine will also act as abstraction engines for native as they tend to do.


It's already out there. https://techcrunch.com/2017/08/28/8th-wall-wants-to-put-awes...

The kit works on Unity and supports both Android and iOS, using the right library on the right phone.


I would be very surprised if Unity didn't have it up and running in one or two months. They already support ARKit, Windows Holographic, and Vuforia more or less natively. Also, given the ground-level work they've done to enable support for VR without directly dealing with vendor-specific plugins, adding just one more is probably not that big of a deal.


Yeah, for the time being, unless you are making a showcase piece or an exclusive, an independent third-party cross-platform kit like Vuforia is still the best choice due to market size. Vuforia works on phones back to the 5s and nearly all Androids with cameras. Though this is more basic AR based on OpenCV target tracking, they do have smart terrain and some VR/AR switching capabilities.

Vuforia's mobile kit works on iOS/Android/desktop and has been around since Qualcomm made it, starting around 2012. They do not have the point cloud and ambient lighting tech that, as far as I can tell, makes ARKit/ARCore a leap, and Vuforia will probably not be able to keep up in the long run, but it is widely available today because it is hardware independent.

With ARKit/ARCore hardware requirements it will take a few years before there is a large enough market to make a mainstream app not just a showcase AR app/game. For this to truly be something that you can get into games that are cross platform, it will take an independent third party like Unity/Unreal/Vuforia/metaio (before Apple bought them) etc to make it mainstream, or an app architecture that can switch out AR depending on device/capability.

I have launched lots of AR apps, mainly games for kids, using Vuforia, OpenCV, metaio and some other kits, primarily where the AR is an extra feature: people play the game on their desk, or the game unlocks from AR targets/trackers on products. Currently it is a nice gimmick without real-world awareness but great for games.

ARKit/ARCore are better than Vuforia but still locked to a platform, which will bring challenges until they're abstracted into a common feature to use in Unity/Unreal/WebGL etc. Exciting progress on AR though between Apple and Google, and competition like this benefits everyone.


This looks to ship with Unity support, as well as Unreal.


Yes.

There is a lot of discussion here about how Apple has a headstart on developer commitment with ARkit.

What will actually happen for the majority of games targeting AR is people will write it in Unity (or perhaps Unreal), and then set it to compile for iOS and Android.

The S8 was the top selling Android phone this year, so this can be rolled out immediately to phones. I just tested out the sample app on my Pixel. As time goes on, the percentage of Android phones with this capability will increase.

ARKit does not work on the iPhone 6 or earlier, or the iPad Air 2 or earlier. It can roll out to a greater percentage of Apple devices right now, but Android has a larger overall market share anyhow. Two years from now, I expect AR on iOS and Android to be fairly on par (of course we have to see how the two stacks measure up against one another).


Where did you find the sample app? I've been looking for it with no luck.


I compiled it. I followed the instructions here -

https://developers.google.com/ar/develop/java/getting-starte...

Actually I followed them somewhat - I never opened Android Studio, I did it all on the command line.

Note you have to install two APKs: the ARCore service APK you download from them, plus the one you're compiling with Gradle or AS.
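For anyone else avoiding the IDE, it boils down to roughly this (the service APK filename and the Gradle task are whatever the preview download and sample project ship with, so adjust as needed):

  # One connected device with USB debugging on. First the ARCore service APK from Google:
  adb install -r arcore-preview.apk
  # Then build and install the sample straight from its repo, no Android Studio needed:
  ./gradlew installDebug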


Unreal or Unity.


Very interesting. Just paging through the docs, the library doesn't seem very hard to use at all. The devil might be in the details, and it's hard to say how rushed it was after ARKit, but they had the parts and bits required for it already done in some form or another.


Is there a difference in the core technology underneath ARCore and ARKit? Just generally curious.


ARCore is based on Tango which was derived from Flyby which was acquired by Apple and turned into ARKit.


I thought ARKit was derived from metaio's technology? https://techcrunch.com/2015/05/28/apple-metaio/


Informed conjecture from https://medium.com/super-ventures-blog/why-is-arkit-better-t...:

  Ogmento was founded by my Super Ventures partner Ori Inbar. Ogmento became 
  FlyBy and the team there successfully built a VIO system on IOS leveraging an 
  add-on fish eye camera. This code-base was licenced to Google which became 
  the VIO system for Tango. Apple later bought FlyBy and the same codebase is 
  the core of ARKit VIO.

  ...

  I don’t have any hard insider confirmation on this, but I think the Metaio 
  codebase would have helped with the plane detection and probably helped with 
  the mapping/relocalization pieces of the visual tracker. FlyBy had by far the 
  best Inertial tracker on the market, and it’s this piece that makes ARKit 
  magic (instant stereo convergence & metric scale in particular).



I remember reading that, it was a great post, but it predates this announcement. So it only compared to the tango hardware which doesn't have very many users compared to something in iOS 11.

I'd love to see an updated one that compared the two directly.


I haven't looked closely into either, but in general, they're both going to be doing some form of SLAM (Simultaneous Localization and Mapping), though the particulars of the algorithms will be proprietary to each vendor. That said, that should largely be an implementation detail. The positioning data provided through their respective APIs will likely be nearly identical.


I ran the sample app on my Galaxy S8 and it's a bit slow sometimes, but it tracks tables well. The floor, not so much...

Anybody know where I can find more APK sample apps to test?


Hi, where is this sample app you tried? I don't see it listed in the OP announcement anywhere...


It can be compiled from the linked GitHub repository.


Looks like they have a few examples here, not all of them have download/source links yet though: https://experiments.withgoogle.com/ar


Also check out: https://venturebeat.com/2017/08/28/8th-wall-raises-2-4-milli...

Supports Unity, and works on both iOS and Android out of the box. (I'm not affiliated, just a supporter.)


Very curious how they pitched this to investors... “We’re building a platform geared to low end devices that will become obsolete within a couple of years. Invest now, and be part of our team’s amazing journey towards acquihire!”


LOL. My guess would be "All ARKit and ARCore bases are belong to us". 8th Wall XR works on all the ARKit and ARCore devices. Why would you want to create an AR app twice when you can create it once? For free.


But why wouldn't I use something proven, like Unreal or Unity in that case?


I'm glad to see this, I'll enjoy experimenting (probably via the aframe ar api) BUT

What are the useful applications for AR outside of verticals?

I've not seen anything compelling in the phone only incarnation.

The headsets have a lot of engineering issues ie many years to overcome.

Even with headsets, it's unclear what the value is of adding the visual clutter and noise that most ambient/immersive computing demonstrations seem to assume.

Whatever value you can add generally requires constant headset wear for it to be ready to hand. This puts even harder engineering problems on the industry, as it forces super-light and easy headsets (Google Glass was not AR, nor a technical path to it).

Not seeing it yet.


Headsets have nothing (or little) to do with AR. You're probably confusing AR and VR.

There are tons of use cases for AR:

- https://storify.com/lukew/what-would-augment-reality

- http://www.madewitharkit.com/ideas and their twitter https://twitter.com/madewitharkit


> Headsets have nothing (or little) to do with AR.

Headsets have a lot to do with AR, which is why HoloLens is a headset; headsets let it be handsfree, use the entire circumambient space, and provide an important and intuitive control for the portion of reality the user is interested in having augmented.

They also allow for stereoscopic 3d, which is useful, though not always essential, for AR.


Having seen most of those proposed in one form or another in the past my response is about the same.

Can't see any of them being worth putting a headset on.

Can't see any of them being worth launching an app on a phone to stare through a camera at.

Some of them need some pretty next level ai.


The heads-up display on a car windshield seems pretty useful.

The rest are pretty underwhelming. Lots of things that make great demos, but not many that feel like I would come back to them.


Do I understand correctly that one big difference between AR on Android vs iOS is that the next iPhone will have advanced 3d sensing abilities that are currently years away on Android phones?


It can be seen from the video (from the way they avoid it) that the "augmentation" is always superimposed on the "reality". I.e. someone can't walk in front of the virtual objects you put on the table.

Is that a limitation of ARKit too?

What would it take to make it "real 3D"?


> It can be seen from the video (from the way they avoid it) that the "augmentation" is always superimposed on the "reality".

I think you mean “inferred” rather than “seen”, if it is an assumption based on avoidance, and there are other explanations; while HoloLens is better equipped than phone-holder software AR to avoid this, the one time I did get to use one there were some glitches when the “augmentation” should have been obscured by the “reality”. If ARCore handles that in principle, but is currently annoyingly glitchy in practice in its current preview-quality state, you might reasonably avoid it in demos.


In live demos yes, but you would totally want to show it in a pre-recorded demo like this one.


Not if it was glitchy enough that you couldn't reliably get a good take.

Either “doesn’t have that feature” or “feature is currently in poor state to demonstrate” are reasons to avoid demoing it.


You don't need to reliably get a good take to produce a video; you need one good take.

If it's bad enough that it won't look right even after taking the best of a large number of attempts, that's as good as the feature not existing for the purpose of my question.


I'm a bit skeptical about the performance to be honest: Great tracking for AR requires careful selection and tuning of cameras and IMUs (inertial measurement units -- essentially MEMS gyro + accelerometer).

Apple has very tight control over their components so they can do this but managing this across a million OEMs and device models (as it is with the Android ecosystem) is close to impossible.

Tango tried to solve the problem by specifying out a software and hardware stack for OEMs to use but now it looks like Google is just too jealous to let have Apple have a good time with ARkit, therefore the "me too".


Hence it is available only on the Pixel (which Google controls) and the Samsung Galaxy S8 (a large partner), because those can be precisely calibrated.

Why is it a Google "me too"? Tango was released in 2014. The basic plane detection functionality that's in Tango is derived from the same mechanism that Apple uses. Facebook released an ARKit-like library at their conference before ARKit was even announced.

When Apple is late to the party, it seems people say "it doesn't matter if you're first, Apple waits till its 'ready'", but when Apple is perceived to have done something first, suddenly everyone accuses Apple's competitors of being thieves and copying.


> When Apple is late to the party, it seems people say "it doesn't matter if you're first, Apple waits till its 'ready'"

Generally people say that in response to everyone accusing Apple of copying.


That's mostly in reaction to Apple's history of claiming copying and its litigious look and feel lawsuits.

Remember "Redmond, Start Your Copiers!"? That was an official WWDC banner hung from the rafters by Apple, not some fanboys. Steve Jobs frequently gave interviews accusing rivals of copying, while copying others himself with the excuse "Good artists copy; great artists steal". All those years of going on the offensive against everyone have created a tendency for the other side to look for hypocrisy now.

There's something pretentious, deeply hubristic, and lacking in humility in Apple's marketing that I think has fanned the flames of these fanboy wars. In a way, their marketing reminds me of the way Trump talks, only with a larger vocabulary. Replace "Great!", "Bigly!", "SAD!", with "Beautiful", "Amazing", "Breakthrough". It's just continuous repetition of superlatives, even for minor features.


Your quotation I think is very important here.

> "Good artists copy; great artists steal"

The point of this quotation is that "good artists" merely copy other people, which is what Apple is talking about when they hang banners like "Redmond Start Your Copiers", but "great artists" 'steal', which means they take the good idea and transform it to make it better. And this is very much how Apple works, and this comes back to your original pseudo-quotation, "it doesn't matter if you're first, Apple waits till its 'ready'". Apple doesn't just blindly copy what other people are doing. They 'steal' the ideas and turn them into something great before releasing them.


Apple has done both, as has Microsoft. Microsoft didn't just blindly copy everything Apple did. As much as I hate the Microsoft of the 90s, they did innovate too.

Apple doesn't blindly copy? What do you call Apple Music then? Ping? What did Apple do to innovate in that space above and beyond Spotify? There's a long list of UI features Apple cribbed from Android, WebOS, and Windows Phone that ended up (IMHO) as inferior copies, where the copy didn't actually improve on (make great) the original.

In what way does Apple Photos improve on the quality of Google Photo's deep learning based categorization that is user-visible and noticeable?


> Microsoft didn't just blindly copy everything Apple did.

Not everything. "Redmond Start Your Copiers" was in response to some very specific copying over the previous year, though I don't remember the details anymore.

> What do you call Apple Music then?

I call it a subscription model to the iTunes store. Subscription models have been around for a long time, even though Spotify is the poster child for applying them to Music I don't think it makes sense to say Apple is copying Spotify (or anyone else) by having a subscription music service, it's the logical evolution of paid music services. You could certainly argue that Spotify proved the customer demand was there (as well as the ability to convince the labels to go along with this), but it's not like Spotify invented the concept.

> Ping?

An unmitigated disaster. But I'm not sure who you think that was copying. I can't think of any pre-existing service like Ping.

> What did Apple do to innovate in that space above and beyond Spotify?

Provide a seamless "it just works" experience across all Apple devices, including integration into their customers' existing iTunes libraries, and into Siri, including the forthcoming Homepod. And I think it's fair to give Apple Music credit for iCloud Music Library as well, which is great.

> That's a long list of UI features Apple cribbed from Android, WebOS, and Windows Phone that ended up (IMHO), inferior copies, where the copy didn't actually improve on (make great) the original.

Can you elaborate?

> In what way does Apple Photos improve on the quality of Google Photo's deep learning based categorization that is user-visible and noticeable?

Apple Photos does it all on-device.

Also, I really don't think you can claim that using machine learning to classify photos is something that Google owns. It's been an obvious idea for literally decades, it's just taken until now before it was feasible to do.


So when Apple merely copies instead of "stealing", it was an obvious idea that had been around for a long time. It's the logical evolution to what they already had. It just wasn't feasible to do until the moment Apple copied it. Got it.


Your sarcasm is not appreciated nor warranted. If you disagree with any of the specific cases I talked about, feel free to tell me why I'm wrong, but merely being sarcastic about it isn't helpful.


Ok I'll bite:

- If something wasn't feasible to do until the moment Apple copied it from others, why didn't Apple do it first?

- Have there been innovations by companies that compete with Apple?

- Have they ever taken an idea from Apple and made it better?


> If something wasn't feasible to do until the moment Apple copied it from others, why didn't Apple do it first?

I really don't know what you mean by this.

> Have there been innovations by companies that compete with Apple?

Where is this line of questioning going? I feel like you're trying to accuse me of saying that nobody but Apple is capable of innovation, which is nonsense.

> Have they ever taken an idea from Apple and made it better?

I'm sure someone has. I don't really keep track of that sort of thing though, so I don't have any examples off the top of my head.


> > If something wasn't feasible to do until the moment Apple copied it from others, why didn't Apple do it first?

> I really don't know what you mean by this.

You've responded to Apple copying others with "it was an obvious step forward, just infeasible before". Which fails to explain why it was infeasible for Apple until after it became feasible for others.

> I feel like you're trying to accuse me of saying that nobody but Apple is capable of innovation, which is nonsense.

It is a strong predictor of how you responded to the examples that were raised. The obvious question after seeing that is "is there a counterexample?"


I said automatically classifying users' photos to search was only feasible to do recently. It didn't become feasible for others first, it became feasible for everyone at about the same time (well, I suppose it was feasible for Google slightly earlier because Google's doing it in the cloud where they have more computing power available, versus Apple being limited by the computing power of the iPhone, but this appears to be such a relatively small difference that it doesn't really matter).


It was only feasible to do recently, in the sense that only recently did Google develop (and publish) the ML techniques that made it feasible. It wasn't some inevitability brought to us by Moore's law.


This isn't my area of expertise, but I thought the recent ML boom was kicked off by ImageNet, which came from the CS department at Princeton, and the ImageNet Large Scale Visual Recognition Challenge? Google's been a big player in ML recently, but they're not the only ones researching this.


ImageNet is a manually-annotated database to train models. Google had the Image Search corpus internally too.

The advances in image and speech recognition of these times are due to innovations in deep learning by Google, Microsoft, NVidia, Stanford, U. Toronto, CMU, and many others. I didn't mean to imply it was all Google's doing. Rather that Apple wasn't there, which is why "just waiting for it to become feasible" sounds like apologetics.

Then there's also innovation in making a product out of the new capabilities, or integrating them into an existing product. I think dismissing that as "everybody was just waiting for the technology" is not realizing that, in hindsight, all products look obvious.


Take a step back and look at how you explain away everyone else's innovations as if they're nothing. "Subscription models have been around for a long time" Really? That's like saying "Online shopping has been around forever". Amazon clearly did something much better than everyone else, just as Spotify's (and Pandora's radio) subscription music service was a clear innovation on all of the services that came before it, and that's why they exploded. Apple essentially adopted their proven business model by leveraging their control of the OS to pre-install and promote their service over a third party, while trying to undercut them because they can avoid their own App Store fee. "but it's not like Spotify invented the concept" Right, they didn't invent "subscriptions", but like Apple, they took a well-known idea and made a very user-friendly service out of it.

>Can you elaborate?

http://geeknizer.com/what-apple-copied-webos-android-wp8-bb1...

There's plenty more. The chrome tab switcher one is the most obvious ripoff.

Then you can add phablets, mini-tablets, big screen phones. Apple for years attacked these devices. They put out an actual television commercial saying any phone where your thumb couldn't reach the whole screen was a bad design. Steve Jobs famously said mini-tablets "should come with sandpaper, so users can file down their fingers"

Then they got up on stage and gave a ludicrous political talking point excuse as to why they had not produced a big screen phone prior to the iPhone 6+ by saying that "big screen display tech wasn't ready yet", absurd because Android devices had been shipping high-DPI Retina-quality displays 2 years prior, displays that when evaluated by DisplayMate often ranked as good or better than the small iPhone display.

The reality is, they had an institutional bias against larger and heavier devices, and got caught with their pants down when they found out that many people, especially in Asia, loved giant phones. Prior to that, Apple fan forums were full of people saying racist things like Asian hands are too small for big phones.

Or how about multi-tasking windows? They criticized that, and then copied Windows Mobile's split-screen snapping identically for the iPad Pro, it was so obvious that Walt Mossberg instantly gasped and called it out during their presentation.

>Apple Photos does it all on-device.

Does it worse, costs more, and the majority of users aren't aware, and don't care. "It's been an obvious idea for literally decades, it's just taken until now before it was feasible to do." Right, everyone else's idea is either obvious, or the innovation to implement it doesn't matter at all. Leaving aside that what you call "feasibility" wasn't just about compute power, but also required conceptual leaps to improve quality as well as ImageNet. It wasn't until CNNs and ImageNet, that the leap was made and quality started to get dramatically better to the point that it was worth shipping to users. And it wasn't until Google shipped it at scale and got industry applause for making photo management dramatically easier and hassle free, that suddenly they had to fast follow.

This is the problem with arguing with Apple loyalists. The constant hagiography. I work for Google, and I criticize the hell out of the failures and missteps Google makes. It is not perfect and I have no problem pointing out it out.

But it seems arguing with Apple fanboys is like arguing with a political pundit on a Cable news network. Their goal is to defend the image of their target no matter what.

Apple copies stuff. Sometimes they innovate, sometimes they just copy what is already proven to be a success without anything but superficial changes.

Going back to my original point, the constant minimization of other people's innovation and achievements, and the boosting of every Apple announced feature as if it's a breakthrough innovation, rubs people the wrong way, and that's why you see people pointing out hypocrisy.

I've been in this industry since the 80s. The marvelous iPhone you hold in your hand today is also the result of a long litany of achievements of others, Apple stands on the shoulders of giants.


Spotify was certainly well-executed. But I don't see how that claim leads to saying that anyone else doing a music subscription service is copying Spotify. You're conflating "doing something well" with "inventing something".

In addition, Apple Music is not just a clone of Spotify. Besides the integration I talked about before, there's also the focus on human curation along with Beats 1. AIUI Spotify relies on algorithms to put together playlists (I'm not a Spotify customer). Apple took the core concept, of a music subscription service, and produced their own unique take on it. Even if you think making a music subscription service is copying (and I stand by my claim that it's an obvious evolution of the music store concept), you have to admit that this falls in the "great artists steal" side of things.

> Multitasking / Task Switcher – Copied from WebOS

Apple's task switcher looks like CoverFlow, which is something they invented many years ago. Unless you're arguing that WebOS owns the concept of seeing the apps that you can switch to, but I don't buy that (besides the fact that seeing the apps you're switching to is a reasonably obvious thing to do once you have the computing resources for it, didn't Windows Vista introduce basically this exact same thing many years ago?).

> Calendar – Copied from Sunrise

I literally can't understand what the description for this one is trying to describe.

> iTunes Radio

iTunes has had radio support since forever, so I don't understand this claim.

> Back Navigation – Copied from BlackBerry OS 10

I've never used a BlackBerry, but after some searching around, it appears that BlackBerry's horizontal swipe gesture is not a back navigation gesture, so this claim is bogus.

> Notification Center / Toggles – Copied from Android, Samsung TouchWhiz

My recollection is that notification / control center was actually based on stuff people were doing with jailbreak.

> Lock Screen – Copied form Android, WP8

Haha what, are you serious? iPhone invented the slide-to-unlock (Apple even had a patent on it!), it's the other OS's that copied Apple here.

--

> Then you can add phablets, mini-tablets, big screen phones. Apple for years attacked these devices.

What does that have to do with this conversation? Apple's pretty famous for saying they'll never do something right up until they do it. Remember the iPod Video?

But the point is, customer preferences changed over time. It used to be that everybody loved small devices (and many people still do). If Apple had introduced a 5.5" device back in, say, 2008, it would have been a humongous flop. So I don't understand what you're trying to say here, beyond acknowledging the fact that Apple recognized that customer demand for larger devices increased to the point where it made sense for Apple to expand their product line to meet that demand.

> Or how about multi-tasking windows? They criticized that, and then copied Windows Mobile's split-screen snapping identically for the iPad Pro

I am not very familiar with Windows Mobile (or even aware that it had split-screen snapping) so I can't comment here, except to question your "they criticized that" comment. When did Apple publicly comment about split-screen multitasking prior to adding support for it?

> Does it worse, costs more, and the majority of users aren't aware, and don't care.

Apple Photos is free.

As for "don't care", this is nonsense. Not only are you underestimating the number of people who go with Apple because they know Apple respects their privacy, you're also implying "it's ok to violate user privacy as long as they don't know you're doing it", which is a pretty shocking attitude.

> Right, everyone else's idea is either obvious

Oh give me a break. It's not my fault if you're picking obvious ideas to try and argue are unique. Or are you really going to try and argue that classifying and searching images isn't something people have wanted to do for decades?

Besides, most core ideas are obvious, and it's the details and execution that matters.

> And it wasn't until Google shipped it at scale and got industry applause for making photo management dramatically easier and hassle free, that suddenly they had to fast follow.

You really don't think Apple was looking into doing image classification and searching before Google released theirs? Just because Apple doesn't talk about upcoming plans doesn't mean they weren't already working on this. I'm very skeptical that this was a "fast follow" to copy Google, especially because Apple Photos gained this capability at the same time that the entire OS was upgraded with machine learning in all sorts of places. This is most assuredly just a case of it becoming feasible to do, so both Google and Apple did it at the same time.

> This is the problem with arguing with Apple loyalists. The constant hagiography.

This is the problem with arguing with Apple haters. Their insistence on retreating into accusing Apple users of cult-like or religious behavior to avoid having to actually construct good arguments.

> But it seems arguing with Apple fanboys is like arguing with a political pundit on a cable news network. Their goal is to defend the image of their target no matter what.

Insulting the people you're arguing with is a terrible idea. Accusing people of being fanboys because they disagree with you is incredibly close-minded and offensive, and demonstrates that you're not arguing in good faith.

> Apple copies stuff.

Uh, yes, they copy things and make them better. That was the foundation of this entire thread. You even quoted Steve Jobs saying "good artists copy, great artists steal". The argument here is that you're accusing them of "blindly copy[ing]", as well as implying that the companies they've "copied" are the sole inventors of the concept, without any recognition that most of these core ideas have been around for a long time. Apple is literally famous for taking things that other companies have demonstrated the viability of and doing them better.

If you really want to accuse Apple of blindly copying, I'm surprised you haven't mentioned Watson/Sherlock. That's by far the most famous case of Apple directly cloning someone else (and is notable in part because this isn't a case where Apple improved on the original, they merely copied it).


> Apple Photos is free.

Free for 5 GB, $120/yr for 2 TB. Google Photos offers unlimited storage for photos up to 16 MP.

As to your other stuff, I don't see the point in continuing since it's so subjective. On any feature which Apple clearly copied and improved, your answer is "whataboutism": trace the lineage and claim that long ago, the seeds of it came from somewhere else. History, for you, is a long line of incremental but uninteresting improvements, punctuated by Apple's adoption, which somehow defines a clearly new thing. When Apple makes an improvement, it is not diminished by the fact that other people had worked on something similar and that its lineage can be traced as accumulated incremental improvements.

But somehow, for everyone else, history diminishes their achievement.

For me, the history of product development is one of hundreds or thousands of actors, researchers, and scientists making and publishing incremental improvements that stack. People who diminish the work of others, try to hog all the credit by rebranding the achievements of others, and fail to give back are parasites.

And for the record, I am not an Apple hater. I have a maxed-out Mac Pro on my desk, I've owned every Apple product you can think of, including standing in line for one of the very first iPhones sold. I have carried almost nothing but iPhones, and my primary mobile computer is a MacBook Pro with Touch Bar.

CRITICIZING a company's magical reality marketing and the inability of its cult-like fans to admit criticism of it, does not amount to hate.

I love Apple products. I hate the way they are marketed.


> Free for 5 GB, $120/yr for 2 TB. Google Photos offers unlimited storage for photos up to 16 MP.

No, Apple Photos is free. It's an app that's bundled with macOS and iOS. You don't pay for it. The prices you just listed appear to be iCloud storage. But you don't need iCloud Photo Library to take advantage of searching your images (because, remember, it's all local).

> As to your other stuff, I don't see the point in continuing since it's so subjective. On any feature which Apple clearly copied and improved, your answer is "whataboutism": trace the lineage and claim that long ago, the seeds of it came from somewhere else. History, for you, is a long line of incremental but uninteresting improvements, punctuated by Apple's adoption, which somehow defines a clearly new thing. When Apple makes an improvement, it is not diminished by the fact that other people had worked on something similar and that its lineage can be traced as accumulated incremental improvements.

Is that really how you're reading this? No wonder your comments have been so off base.

I never claimed that Apple's improvements were somehow creating a "clearly new thing". You've made that up out of whole cloth. I said Apple improves stuff when they copy, yes, but that just means it's an improved version. By and large, Apple's versions of things are just improvements over previous iterations of the thing. Sometimes Apple does create uniquely brand new stuff, but those times aren't what we've been talking about.

But no, you have this narrative of Apple users being cult-like and as a result you're putting words into my mouth.

> People who diminish the work of others, try to hog all the credit by rebranding the achievements of others, and fail to give back are parasites.

You mean like people who try to claim that Apple isn't doing anything other than "blindly copy[ing]", dismissing the improvements Apple made to their implementations and trying to give all the credit to previous iterations, while ignoring that those previous iterations themselves had a long history to draw from (such as the decades of machine learning research that happened before both Google and Apple were able to introduce photo library search)?

> And for the record, I am not an Apple hater

Then why did you reach for the cult metaphor and the "fanboy" label?


>No, Apple Photos is free. It's an app that's bundled with macOS and iOS. You don't pay for it. The prices you just listed appear to be iCloud storage. But you don't need iCloud Photo Library to take advantage of searching your images (because, remember, it's all local).

The whole point of Google Photos is that it is a cloud backup service that indexes a lifetime of photos and manages them for you. Google Photos without the free, unlimited storage misses the whole point of liberating you from ever having to worry about managing photos again.

Saying Apple Photos, the app, is free is like saying email clients are free. Sure, you could store a lifetime of email on your phone, but the original value proposition of Gmail was that they gave you so much free storage at a time when services like Yahoo! and Hotmail charged you for more than 25 MB. Most of the value is managing the storage for you, worry free.

Don't give your users shitwork. Having to manage your phone's storage, sweat over metered storage, and delete stuff to free up space is needless, janitorial shitwork. That's why the web is so amazing: I don't care what websites I visit, because I don't have to manage what's in my browser's cache; it's purged automatically if unused, and anything important I do online is persisted in the cloud.

Storage management is annoying and anti-user, and Google Photos is about peace of mind.


This is a weird tangent, but I'll bite.

The whole point of Google Photos is that it convinces everybody to give all of their photos to Google, so Google can data-mine them.

Saying Google Photos is free is like saying that Gmail is free. Sure, you're not paying for the service itself, but you're paying with your privacy and access to the most intimate details of your personal life.

--

In any case, discussion about online storage of photos isn't relevant at all to this thread, especially when the context of Photos was searching them, which has nothing to do with how they're stored or whether they're synced.


"That's mostly in reaction to Apple's history of claiming copying and its litigious look and feel lawsuits."

Which is generally a response to Apple itself being sued constantly over the iPod.


The look-and-feel lawsuit happened 7 years before the iPod (https://en.wikipedia.org/wiki/Apple_Computer,_Inc._v._Micros...)

Apple tried to ban other companies from using a GUI, claiming ownership of the concept. Of course, they themselves got the idea from Xerox, which got the idea from SRI/Engelbart, which got the idea from Sketchpad (Ivan Sutherland), which got the idea from Memex/Vannevar Bush all the way back in 1945. There's a continuous lineage of which Apple was a part, and they clearly pushed the field further from military and corporate R&D to consumer, but did they deserve to own a monopoly on it?


Google took the names of the two best features of iOS 11 and combined them: ARKit + Core ML = ARCore

Anyone else here got it?!


Exciting to see competition here


My real table is messy enough. Now I can make it messy digitally too!


Will this work on both Qualcomm and Exynos variants of the S8?


What’s with the majority of the shots being crop shots, or shots that don’t involve the object moving completely in or out of the frame? Seems to me like it’s potentially hiding some visual defects


I can't wait to dance with a hotdog.


How does this compare to Apple's ARKit?


Hopefully someone will make a good write up.

From seeing ARKit examples that people have posted to Twitter, the thing that has impressed me the most is ARKit's ability to track your position even if you turn around and walk around, even all around the office. I hope Google's version can do that as well, because it seems like it would enable some really fun activities.


I can't tell: is there an app one can download to try this?

Doesn't look like it, huh?


There's a sample you can download and run on compatible hardware: https://github.com/google-ar/arcore-android-sdk/tree/master/...


Thanks.


I'd rather they rewrote the Camera2 API, which is the most horrible API I've seen in my 20+ years in this profession. It's so bad one might think it's an elaborate prank, but no, Google really does expect you to use it to interact with cameras. That's why all photo apps on Android are so ridiculously bad compared to iOS.
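
To give a rough idea, here's a minimal Kotlin sketch of roughly what it takes just to start a repeating camera preview with Camera2. It's a simplification, not working production code: assume it runs inside an Activity, the CAMERA permission is already granted, previewSurface is an android.view.Surface you created elsewhere, and all error handling and threading is omitted.

    // Hypothetical sketch, not production code: assumes we're inside an Activity,
    // android.hardware.camera2.* is imported, the CAMERA permission is granted,
    // and previewSurface is an android.view.Surface that already exists.
    val manager = getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val cameraId = manager.cameraIdList.first()   // just grab the first camera
    manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(camera: CameraDevice) {
            // Nested callback #2: configure a capture session for the output surface.
            camera.createCaptureSession(listOf(previewSurface),
                object : CameraCaptureSession.StateCallback() {
                    override fun onConfigured(session: CameraCaptureSession) {
                        // Only now can we actually ask for preview frames.
                        val request = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
                        request.addTarget(previewSurface)
                        session.setRepeatingRequest(request.build(), null, null)
                    }
                    override fun onConfigureFailed(session: CameraCaptureSession) { /* give up */ }
                }, null)
        }
        override fun onDisconnected(camera: CameraDevice) = camera.close()
        override fun onError(camera: CameraDevice, error: Int) = camera.close()
    }, null)

Compare that to the old android.hardware.Camera API, which was roughly Camera.open() + setPreviewTexture() + startPreview().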


Yep, it's horrible. The Activity + Fragment API is pretty bad too. And the worst was the first version of the in-app billing or GCM API (I don't remember which one). You had to copy hundreds of lines of code for a hello world.


Google's ARKit


Haha, check out the commits on their github for three.ar.js: https://github.com/google-ar/three.ar.js/commits/master

> Build and increment to 0.1.1

> jsantell committed 26 minutes ago (failed)

....

> Fix linting

> jsantell committed 24 minutes ago (success)

edit: aww come on folks it's all in good fun


release days are fun


"Nazi" linting rules are not.



