
Eh, they've both got their pros and cons.

iOS:

+ Swift

+ Fewer screen sizes to worry about

+ Fewer iOS versions to worry about

+ Xcode is much lighter on resources

- Mac only

- Xcode might crash every now and then

- Probably need an iOS device, as the simulator is very slow

- $100/yr developer fee

Android:

+ Kotlin

+ Android Studio runs everywhere

+ More stable IDE

+ Decent emulation

- Countless screen sizes to worry about

- Lots of Android versions to worry about

- $25 developer account fee

- Android Studio is resource heavy

- NDK requires lots of JNI boilerplate

Though, with all of this AR stuff, I'd just go the Unity/Unreal route, as it will probably be very game-y and such.




> - Probably need an iOS device, as the emulator is very slow

Only if you're doing OpenGL, as that's actually rendered in software (not sure if the same applies when you're using Metal). Otherwise, the simulator (not emulator!) is fast, as it should be, since it's running native code (which is exactly why it's a simulator, not an emulator). If anything, it's important to test on devices because the simulator can mask performance problems (though as devices become more and more powerful, that becomes less of an issue).


AFAIK, there is no hardware acceleration whatsoever for graphics in the simulator; it's all software-rendered.


I'm sure you're correct, but that seems backwards. OpenGL is a cross-platform API; why would that be emulated in software? And the iOS devices have a different CPU architecture (ARM) than the MacBook Pro (x64) you develop on, so how is that being run as native code?


I can only speculate as to why OpenGL is rendered in software. I assume it has to do with the graphics capabilities being different on iOS versus Macs, and perhaps software rendering is needed to ensure consistent behavior or implementation of OpenGL extensions (though I'm not positive the simulator offers the same OpenGL extensions anyway; I'm not a graphics programmer, so I haven't really explored this).

As for it being a simulator, it's because your app actually compiles to x86_64 code when you're targeting the simulator. When you switch between targeting the simulator and targeting a device, your app is recompiled for the new architecture. And the simulator includes a complete copy of iOS and all its frameworks (but without most of the built-in apps) that were compiled for x86_64 in order to run in the simulator.
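A quick way to see this for yourself (a minimal sketch; the configuration and paths are illustrative and depend on your project layout):

    # Build for the simulator, then inspect the binary's architecture.
    xcodebuild -sdk iphonesimulator -configuration Debug build
    lipo -info build/Debug-iphonesimulator/MyApp.app/MyApp
    # -> Non-fat file: ... is architecture: x86_64

Switching the same target to a device SDK produces an ARM binary instead.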


OpenGL is not being emulated, it's being implemented. OpenGL is just the API and specifications; it's up to individual graphics hardware vendors to put a conforming OpenGL implementation in their driver. Ideally it would all be done very fast in hardware, but there are still times when a particular feature can't be done on certain hardware and is performed in software to conform to the OpenGL spec.

Apple already has a complete software OpenGL implementation, which they may have modified to simulate the individual OpenGL ES implementations for each iOS device. This also has the advantage of removing the developer's hardware from the equation: If they want to test a bleeding-edge OpenGL ES app on a really old MacBook, it'll run - just slowly.


When you compile for the simulator, the binary output is x64.


Anyone who uses the excuse that there are "Countless screen sizes to worry about" makes me wonder whether they even know what they're talking about when it comes to Android development. The developer of Pocket Casts says this is a fallacy:

https://rustyshelf.org/2014/07/08/the-android-screen-fragmen...


To whatever degree it was true, it was a lot MORE true back when you only had to worry about the original iPhone screen; right now it's the original size, the Plus, and maybe the 5.

It's no longer just one or two sizes on iOS. Probably a minimum of three, assuming you don't want to do a tablet app, and that may change in two weeks.


I'd also add: the NDK uses several different flavors of libc++, each screwed up in its own unique way. Apple uses a bog-standard libc++.


It provides several different flavors of the C++ STL, including libc++. `libc++` is specifically the implementation shipped by LLVM/Clang (which is also what Apple uses).

The next release of the NDK (r16; due out sometime this quarter) will stabilize libc++ for use by NDK applications and it'll be made the default STL in r17 (which will be out by the end of the year).
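For anyone who hasn't touched this: which STL flavor you get is a one-line build setting. A minimal ndk-build sketch (module setup illustrative):

    # Application.mk
    # c++_shared / c++_static select LLVM's libc++;
    # gnustl_* and stlport_* are the older flavors.
    APP_STL := c++_shared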


That doesn't change the fact that C++/NDK development has been busted for years, for no apparent reason other than Google not paying attention.


If you check the ARCore announcement, the line "ARCore works with Java/OpenGL, Unity and Unreal and focuses on three things:" shows where the NDK stands.

It's nothing more than a way to implement Java native methods, which I'm completely OK with, given the security implications.

I just wish that, since they have their own fork of the Java world, they would also bother to provide something besides forcing us to manually write JNI calls.


> Countless screen sizes to worry about

I think you are taking it the wrong way, and you are not at fault; there is a lot of FUD about screen sizes.

Designing for flexible screen sizes and densities is pretty easy, and I don't think I would gain any significant amount of time if Android were limited to 10 screen sizes.

You just think in terms of density-independent pixels and inflection points where you adapt your design (one more row, multiple panes, etc.).
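A minimal Kotlin sketch of the inflection-point idea (the breakpoint values and class name are just illustrative):

    import android.app.Activity
    import android.os.Bundle

    class GalleryActivity : Activity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Width in density-independent pixels: the same logic
            // covers every physical screen size and density.
            val widthDp = resources.configuration.screenWidthDp
            val columns = when {
                widthDp >= 840 -> 3  // large tablet
                widthDp >= 600 -> 2  // small tablet / big phone
                else -> 1            // phone
            }
            // ...inflate a layout with `columns` panes...
        }
    }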


Which really you're supposed to be doing on both OSes at this point. As an iPhone user, there are still apps that don't support the 6's screen size, and it's obnoxious.

iOS has four different retina screen resolutions on the phones, more on the tablets. For all we know there will be more in two weeks. Creating pixel-perfect layouts doesn't work very well anymore.

It may be easier to exhaustively test on iOS because there aren't quite as many variations, but devs should definitely be using flexible layouts.


Completely agree, I only mentioned Android because of the 'screen size hell' myth.

Another argument is that you want to future proof your app.

Flexible layouts all the way.


And then of course there's accessibility. If your app is already designed to handle different screen sizes, then it's easier to resize various elements when the user wants bigger or smaller text.
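On Android that mostly falls out of sizing text in sp rather than dp/px, e.g. (a minimal sketch; function name illustrative):

    import android.util.TypedValue
    import android.widget.TextView

    fun applyBodySize(label: TextView) {
        // COMPLEX_UNIT_SP scales with the user's font-size setting;
        // COMPLEX_UNIT_DIP would ignore it.
        label.setTextSize(TypedValue.COMPLEX_UNIT_SP, 16f)
    }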


Android:

- Developer fee of $25 if you plan to actually publish to the store

- Needs a quad-core Xeon with at least 16 GB and an SSD to have a usable experience with Android Studio, or configure it to run in laptop mode

- NDK requires lots of JNI boilerplate to call about 80% of Android APIs
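To make the JNI point concrete, here's the boilerplate for a single native call (hypothetical names; a minimal sketch):

    package com.example

    class NativeBridge {
        companion object {
            init { System.loadLibrary("mybridge") }  // loads libmybridge.so
        }

        // The C side must export the exact mangled symbol:
        // JNIEXPORT jint JNICALL
        // Java_com_example_NativeBridge_addNumbers(JNIEnv*, jobject, jint, jint)
        external fun addNumbers(a: Int, b: Int): Int
    }

Now multiply that by every Android API you need to reach from native code.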


> - Needs a quad-core Xeon with at least 16 GB and an SSD to have a usable experience with Android Studio, or configure it to run in laptop mode

I'd disagree with the Xeon bit; I have a 6-year-old Sandy Bridge quad-core, and Android Studio runs butter smooth.

I'll confess to the 16 GB of RAM and an SSD, though. Although honestly, an SSD is required nowadays for anything to be usable.

Android Studio is amazingly performant, though, and the emulator is great, ignoring bugs, glitches, and the occasional times it just stops working until I flip enough settings back and forth that it starts working again.

Of course a huge benefit is that I don't need Apple hardware to develop for Android.


The main issue I have seen is that people don't know how to configure Android Studio & Gradle memory consumption.

Granted, they should not have to do that in the first place, but once done correctly, it makes AS fly even on very large projects.
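For reference, the usual knobs live in gradle.properties (the values are illustrative; tune them to your machine):

    # gradle.properties
    org.gradle.jvmargs=-Xmx4g -XX:MaxMetaspaceSize=1g
    org.gradle.daemon=true
    org.gradle.parallel=true
    org.gradle.caching=true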


I'd also rather develop for Android, but Android Studio's resource requirements made me appreciate Eclipse again.

Apparently AS 3.0 will be better in that regard.


> I'd also rather develop for Android, but Android Studio's resource requirements made me appreciate Eclipse again.

There is a reason my dev machine is a desktop: better keyboard, better monitor, better performance. It's a 6-year-old machine that cost about $1500 and performs better than the ultraportables a lot of people try to press into service for writing code. Even with a faster CPU, thermal throttling is a concern once the form factor shrinks past a certain point.


We don't get to choose what we get.

Usually the customer's IT assigns hardware to external consultants.


Ah, interesting. When my team used external consultants, we did the inverse: we gave the consulting company a beefy requirements list and told them anyone sent to work for us had to be at least that well equipped.

Paying by the hour, we were heavily motivated to minimize compile times. :)


> Needs a quad-core Xeon with at least 16 GB and an SSD to have a usable experience with Android Studio, or configure it to run in laptop mode

I've had no problems using Android Studio on my Mac with 8 GB. On a side note, the Android emulator even started faster than the iOS simulator. I also found it odd that the Android emulator seemed to consume fewer resources than the iOS simulator, which was taking up about 2 GB of RAM.


Thanks; it's a one-time payment, so I completely forgot about Android's developer account fee. And I'll add the other points too.

Edit: Also, how does Xcode compare in performance? It seems lighter, but I only have a pretty recent MacBook Pro to test on (which also handles Android Studio just fine).


> Also, how does Xcode compare in performance? It seems lighter, but I only have a pretty recent MacBook Pro to test on (which also handles Android Studio just fine)

Depends on what you open with it: for ObjC it's usually faster and smoother, for Swift it tends to be slower (about on par with AS's Kotlin plugin), and for C++/ObjC++ it's horribly slow, just like any other IDE out there :/


Much lighter; the iMacs at the office still handle it.


> Though, with all of this AR stuff, I'd just go the Unity/Unreal route, as it will probably be very game-y and such.

You are confusing VR and AR. AR has a ton of legitimate use cases outside gaming:

- https://storify.com/lukew/what-would-augment-reality

- http://www.madewitharkit.com/ideas and their twitter https://twitter.com/madewitharkit


I read that as AR development having more in common with game development, which makes sense given the emphasis on performance, 3D rendering, and latency.


Edit: Ninja'd by Crespyl :P

Yeah, I should have said it could benefit from using a fully-fledged game engine, considering you'll likely need to import 3D models, have objects interact with each other, and so on, which Unity would help a lot with.

However, I primarily do experimental work in those engines, so I'm pretty biased.


VR also has a ton of non-game use cases. This is one of the biggest issues with VR marketing. But yes, as other commenters have mentioned, AR development still benefits from using a game engine (maybe rebranding it as a "3D application framework" would make it more palatable to "serious app developers"), because you probably want to work with 3D models/rendering.



