Android or iPhone? Which platform should I target first? iPhone, for now, though it’s close. Android cannot (yet) provide the same level of developer tools or paying user base, the Android emulator can’t handle graphics, and I already own an iPhone. I do expect that Android’s competitive position will improve significantly over the next few years, but it’s not there yet.
- iPhones are all (almost) exactly the same. 4th generation iPhones do support an optional higher screen resolution than previous models, but they’re 100% binary backwards compatible. This is a trivial difference when put up against the dozens of different Android devices available with radically different hardware, and the hoops that you as a developer have to jump through to make sure your app runs correctly on all of them.
- Large, high-end user base that demonstrably spends money on apps. iPhone users have already shown their willingness to buy large numbers of apps; Android users, not so much. This seems to be slowly changing over time as the Android app store matures, but for now, iPhone is still the cash cow.
- Beautiful and well-designed UI and system architecture. The iPhone has been designed as an integrated and coherent experience. Apple’s renowned mastery of intuitive human-computer interaction is evident at every step while using the device. There is an overall (and well documented) unifying strategic vision of how different subsystems should fit together seamlessly and present a unified interface to the user, and the development libraries have been designed to support this theme at every level. Android, by contrast, reads much more like KDE or Gnome; it was bolted together out of disparate pieces written by different groups working in isolation with minimal coordination. Perceived technical coolness, from the perspective of a software engineer already intimately familiar with the system’s internals, took precedence over usability. Android’s user interface is generally much uglier and much less intuitive.
- iPhones support low latency audio. Android uses ALSA internally, which does support low-latency audio, but the low-latency API is not exposed to developers, and Google does not seem to be interested in fixing this. Android latency is unacceptably high for realtime sound synthesis unless you hack it and use the ALSA API, which is officially forbidden and in any case could change significantly and without warning between Android operating system updates and between phones. iPhone, on the other hand, has an officially supported low latency audio API, and there are many successful realtime sound synthesizer apps in the app store.
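To put rough numbers on why buffer size matters: output latency is at least the buffer length divided by the sample rate. A back-of-the-envelope sketch (the buffer sizes are illustrative assumptions, not measurements from any particular device):

```java
public class LatencySketch {
    // Rough output latency in milliseconds: the hardware can't play a sample
    // until the buffer containing it is handed off, so a buffer of N frames
    // adds at least N / sampleRate seconds of delay.
    static double latencyMs(int bufferFrames, int sampleRateHz) {
        return 1000.0 * bufferFrames / sampleRateHz;
    }

    public static void main(String[] args) {
        // A 4096-frame buffer at 44.1 kHz (illustrative of the large buffers
        // the high-level Android audio path imposes) is around 93 ms,
        // far too laggy for an instrument that responds to touch.
        System.out.println(latencyMs(4096, 44100)); // ~92.9
        // A 256-frame buffer, the sort of size a low latency audio API
        // permits, is under 6 ms.
        System.out.println(latencyMs(256, 44100)); // ~5.8
    }
}
```

Anything much above roughly 10 ms of added delay becomes perceptible when you trigger notes by touch, which is why being stuck with large buffers kills realtime synthesis.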
- Xcode. Xcode is a really well done IDE. I normally don’t like IDEs; I normally use Emacs for everything. But, in all fairness, I decided to give Xcode and Eclipse with the Android plugins a shot. Eclipse is nice, but a little rough around the edges: I ran into a few bizarre IDE error messages while developing my Android test projects that required extensive Googling and hours of experimentation to unravel. Xcode, as one would expect from a flagship Apple product, is immaculately polished and completely integrated from top to bottom.
- Objective C. While Objective C was really cool and innovative in 1988, today it seems archaic. Nobody uses it except Apple. It’s laden with tons of cruft. It still feels like it’s a bolt-on afterthought preprocessor for a C compiler. For example:
- Selectors: you can’t refer to method names directly; you have to use the special `@selector()` compiler directive to look up their index number. This probably made great sense when CPUs were very slow and a lookup in an ASCII table was a significant performance hit compared to an integer offset jump, but on modern computers it is just annoying. CPU time is cheaper than engineer time.
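Java’s reflection API gives a rough feel for what selector-style dispatch is: the method is looked up by name at runtime instead of being bound at compile time. This is only an analogy (the names here are mine, and Objective C’s actual selector machinery is more efficient than Java reflection):

```java
import java.lang.reflect.Method;

public class SelectorAnalogy {
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        SelectorAnalogy obj = new SelectorAnalogy();

        // Direct, statically bound call: the compiler checks everything.
        String direct = obj.greet("world");

        // Selector-style call: look the method up by name at runtime,
        // roughly what @selector(greet:) plus message sending does.
        Method m = obj.getClass().getMethod("greet", String.class);
        String viaLookup = (String) m.invoke(obj, "world");

        System.out.println(direct.equals(viaLookup)); // true
    }
}
```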
- Manual reference-counted memory management is an obsolete and extremely fragile pain in the ass. Lisp had GC ~40 years ago. The vast set of problems introduced by manual memory management, such as dangling pointers, double frees, and memory leaks, is just not worth the small gain in efficiency that you get from not periodically running a garbage collector. Again, CPU time is cheaper than engineer time. Android does GC on a phone just fine, so I don’t buy the argument that a phone can’t support that kind of overhead. Garbage collection was recently introduced for the Mac OS X desktop system, but is not yet available on the iPhone.
(I’m aware that C and C++ are widely used and both still make you do manual memory management, too. Far from regarding this as a vehicle for a macho display of programming prowess, I find manual memory management to be an annoyance, something the computer should handle for me automatically, much as figuring out the exact memory address of a function was automated long ago and is no longer something we as application-level programmers ever need to worry about.)
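To make the fragility concrete, here is what manual retain/release bookkeeping amounts to, transplanted into Java, where the real GC makes it unnecessary (the class is invented for illustration):

```java
public class RefCounted {
    private int refCount = 1;   // the creator holds one reference
    private boolean freed = false;

    void retain() { refCount++; }

    void release() {
        refCount--;
        if (refCount == 0) {
            freed = true;       // stand-in for actually freeing the memory
        } else if (refCount < 0) {
            // The classic bug: one release too many corrupts the heap.
            throw new IllegalStateException("over-released");
        }
    }

    boolean isFreed() { return freed; }

    public static void main(String[] args) {
        RefCounted obj = new RefCounted();
        obj.retain();           // a second owner takes a reference
        obj.release();          // first owner done
        obj.release();          // second owner done, object freed
        System.out.println(obj.isFreed()); // true
        // Forget one release() and you leak; call one extra and you crash.
        // A garbage collector does exactly this bookkeeping for you.
    }
}
```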
- The use of `-` and `+` to mark methods as instance or class methods. Principle of least surprise: why not just use, say, the English word `static` or something to that effect to designate class methods, and leave instance methods unmarked? Perhaps using one-character symbols rather than entire English words as tokens improved compile times on large, complex projects in the ’80s, but the time savings is infinitesimal on a modern computer, so I’ll vote for obviousness and clarity rather than crypticity.
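Java already does exactly what I’m suggesting: the plain English word `static` marks class-level methods, and instance methods carry no marker at all. A trivial example:

```java
public class Counter {
    private int value;

    // Class method: designated with the plain English word "static".
    static Counter startingAt(int n) {
        Counter c = new Counter();
        c.value = n;
        return c;
    }

    // Instance method: no marker at all, which is the common case.
    int next() {
        return ++value;
    }

    public static void main(String[] args) {
        Counter c = Counter.startingAt(41);
        System.out.println(c.next()); // 42
    }
}
```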
- The use of `[ ]` (square brackets) for message passing, and the requirement for named (but non-reorderable) arguments. Perhaps this was done out of a well-intentioned effort to make the programmer notice that these are messages, not function calls or C++ methods, and to make the code self-documenting, but I find the introduction of new syntax rather unwarranted and distracting. I’d much prefer the familiar `object.message(argument1, argument2)` over the verbose and unwieldy `[object message:argument1 parameterName:argument2]`. You can’t even re-order the named arguments, so including the argument name is just noise in the code. I guess I’ve been spoiled by Clojure and Lisp, where there’s almost no syntax.
If they really felt strongly about the need to pedantically differentiate messages from function calls or methods at the syntax level, they should at least have retained the general function/method syntactic template and chosen a different separator character, so as to let programmers re-use most of their neural syntax-parsing circuitry rather than forcing us to learn an entirely new syntactic template that performs only one task. For example, a more intuitive syntax that still emphasizes that this is not a function or method call could be something like `object@message(argument1, argument2)` (a hypothetical spelling: the familiar call template with one new separator character).
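For comparison, if named arguments are what you’re after, there are designs where the names actually buy you something. A builder in Java (the `Rect` class is hypothetical) lets callers name every argument and reorder them freely:

```java
public class Rect {
    private final int width, height;

    private Rect(int width, int height) {
        this.width = width;
        this.height = height;
    }

    int area() { return width * height; }

    static class Builder {
        private int width, height;

        Builder width(int w)  { this.width = w;  return this; }
        Builder height(int h) { this.height = h; return this; }
        Rect build()          { return new Rect(width, height); }
    }

    public static void main(String[] args) {
        // The names carry real information because the order is free:
        Rect a = new Rect.Builder().width(3).height(4).build();
        Rect b = new Rect.Builder().height(4).width(3).build(); // reordered
        System.out.println(a.area() == b.area()); // true
    }
}
```

Here the names are not noise: since the order is free, the name is the only thing identifying each argument.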
- Closed platform. The source code to the vast Cocoa library is proprietary. You can develop with things Apple tells you you may develop with, and nothing else. They only recently relaxed the restriction on using any third party tools at all. The iPhone's CPU is widely reported to have built-in hardware level Java bytecode support, but that's off-limits.
- Vendor lock-in. This is a huge and obvious minus for the iPhone. Your ability to generate revenue is 100% dependent upon Apple's whims, upon remaining in Apple's good graces. Uncountable horror stories circulate online about developers who invested months and many thousands of dollars into a project only to have it rejected by Apple, often after a considerable delay and with little or no explanation.
- Java (almost). Android apps are written in Java. (Well, technically not in Java due to licensing issues, but in a language that is 99.9% the same as Java.) If you already know Java, as I do, this is a big plus. You can easily re-use your code elsewhere, on any platform, with minor or no modifications. Objective C code is rather limited in what you can do with it: you can re-use it in your Apple desktop apps. (Yes, there is a free GNU Objective C compiler, but almost all real-world Objective C code is locked into Apple's proprietary Cocoa runtime system.)
- A JVM, almost. Android doesn't run a true JVM, unfortunately, as part of a licensing dispute between Google and Sun, but it's close. Android's Dalvik is a cleanroom reimplementation of part of Java, designed to avoid triggering what Google regards as onerous licensing fees for the use of a true JVM on a mobile handset. In any case, it's close enough that the door is open to potentially running Clojure and other JVM languages on it someday.
- No upfront developer fees. Develop for free. You can download the SDK and go nuts, upload your code to a real handset to test, send your apps to the app store for sale, whatever. There is no $99 developer tax.
- Larger ultimate user base. Android phones are now outselling iPhones, and this trend looks set to continue. The price you pay is coding for 200 slightly different handsets.
- Open source platform. Most of the Android system is freely available, precluding vendor lock-in, at least in theory.
- Android emulator lacks hardware accelerated graphics. This is a big minus. I made a very simple bouncing-ball demo with Rokon that got about 6 or 7 fps in the emulator. Full games often take multiple seconds per frame, completely unusable. If you want to write games (or anything else where graphical performance is key), you absolutely need to have a hardware handset from the start. If you don't already own an Android phone anyway, this means you have to invest hundreds of dollars upfront, just to see if the platform is for you.
- Lack of interest from Google. Google has many, many pots cooking at once. While they do devote significant resources to Android's maintenance and expansion, it's nowhere near as important a product for them as the iPhone is for Apple, and you can tell. The iPhone is a core strategic product for Apple, whereas Android is a relatively minor satellite project for Google. Correspondingly, Apple's general support of the iPhone and its developers seems far better.
- Relatively few libraries. For the moment, there seem to be more third party libraries available for the iPhone, and they seem to be more mature and robust than what's out there for Android. This will change in time, as more and more app developers begin to use Android.
It was a close call. I did some exploratory coding on both platforms — there's no substitute for direct experience. Though I much prefer Java over Objective C as a language, the universe of extant Android libraries is much smaller and less mature. I may have been able to live with that in order to be able to code in Java and avoid Apple's vendor lock-in. In the end, the tipping point was practical: the Android emulator's lack of hardware accelerated graphics left me with an unusable framerate, so I'd have to spend hundreds of dollars on a handset to do even trivial demo development, whereas I already own an iPhone.
I'll be checking back in on Android periodically, as I expect its long-term position will improve substantially in time, but for now, I'm going with the iPhone.