I actually like the way nil is treated. I'm not a big fan of writing code with guards all the time like this:
// As in C#/Java
if (obj != null) {
    obj.doSomething();
}
At least in javascript it's easier with the && operator:
str && str.doSomething();
In Obj-C you can just:
[str doSomething]; // If you really care if it happened or not, then introduce a check for nil
Once you get used to the idea that nil doesn't break your code, you start writing your methods accordingly.
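For example, a quick sketch of what that looks like in practice (the variable names are just made up):
// Messaging nil is a no-op that returns 0/nil/NO for the corresponding return type.
NSString *name = nil;
NSUInteger length = [name length];          // 0, no crash
NSString *upper = [name uppercaseString];   // nil, no crash
if ([name isEqualToString:@"bob"]) {        // NO, so the branch is simply skipped
    // never reached
}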
For me, a lot of the stuff you write about (ARC, Blocks, literals) are the great improvements to the language.
Before ARC you had C/C++ like control of the memory and you could do some crazy stuff like an object holding on to itself and then committing suicide (pretty useful actually). With ARC you only have to care about retain cycles and declaring weak/strong pointers. It's an awesome feature for a non-GC environment.
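To illustrate the retain-cycle part, here's a sketch with a made-up class (a block stored by self that captures self strongly keeps self alive forever):
@interface Downloader : NSObject
@property (nonatomic, copy) void (^completion)(void);
- (void)cleanup;
@end

@implementation Downloader
- (void)start {
    // self.completion = ^{ [self cleanup]; };   // retain cycle: self -> completion -> self
    __weak typeof(self) weakSelf = self;          // a weak reference breaks the cycle
    self.completion = ^{ [weakSelf cleanup]; };
}
- (void)cleanup {}
@end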
Blocks are truly the biggest change in Obj-C and made the code much easier to develop. Any async task is now easy to deal with. Alongside GCD, multi-threading is made simple.
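Something like this hypothetical snippet (loadData and updateUIWithData: are made-up names) is the pattern I have in mind:
- (void)reloadInBackground {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSData *data = [self loadData];              // slow work off the main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            [self updateUIWithData:data];            // hop back to the main thread for UI
        });
    });
}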
What I'm really trying to say is that from the first iPhone OS to iOS 7, we've seen some major improvements to the language. If it continues at the same pace, you might like it more in your next year's review.
nil is a safety/convenience trade-off -- it can be dang convenient, but it's just begging for hard-to-spot bugs to appear in your code. To use your example:
> [str doSomething]; // If you really care if it happened or not, then introduce a check for nil
Usually I actually care if the code I wrote did something! Writing nil checks everywhere would be non-idiomatic and really annoying. (Something like Haskell gives you the best of both worlds, arguably.)
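A sketch of the kind of bug I mean (all the names here are hypothetical):
// self.dateFormatter was never assigned, so everything below silently returns nil.
NSString *label = [self.dateFormatter stringFromDate:order.createdAt];
[self.titleLabel setText:label];   // the UI just shows an empty label
// No exception, no log -- nothing points you at the forgotten initialization.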
I agree the language has gotten much better. I've worked on two ObjC iPhone projects, one before ARC and blocks, and one after. The impact of blocks, especially, is incredible. So, so, so much nicer. (Also object literals are a nice recent addition.)
That said, the criticisms voiced in this article, which I agree with and think are real problems, are unlikely to go away -- they're just too closely tied up in the core of what objc is. (You're never going to have a nil that throws or type-safe containers in objc, I'll predict right now.)
I do overall like the language. Especially for GUI code, which tends to have lots of stuff doing relatively straightforward things, the style of method naming, while verbose, is awesome -- the code is extremely easy to read, and it's well-nigh impossible to get bugs by mixing up the argument order. (Compare this to e.g. python, where you have neither type system nor naming to tell you if you mess up parameter order.)
There are some annoying circumstances where you still have to check for nil, so it's not exactly consistent. For example (sketched in code after the list):
1. Adding nil to an array triggers an exception.
2. Trying to set a dictionary key/value pair with nil for either value triggers an exception. It'd be more consistent with other parts of the code base if setting a nil value unset that particular key.
3. Calling a block storage variable that's nil will crash.
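Roughly what those three cases look like in code (a quick sketch):
NSMutableArray *items = [NSMutableArray array];
// [items addObject:nil];                  // 1. throws NSInvalidArgumentException

NSMutableDictionary *info = [NSMutableDictionary dictionary];
// [info setObject:nil forKey:@"name"];    // 2. also throws
[info removeObjectForKey:@"name"];         //    what you call instead to "set nil"

void (^callback)(void) = nil;
// callback();                             // 3. crashes rather than doing nothing
if (callback) callback();                  //    the usual guard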
I suppose it depends on what you consider consistent (i.e., all nil operations should be no-ops or just message-sending operations), but that only covers 1 and 2. Trying to invoke a nil block still crashes.
That's because blocks aren't completely Objective-C objects. They can act like them in some cases, but underneath they are still C-based. That's why you pass NULL instead of nil to specify an empty block.
All of Objective-C is C based in the end. Blocks are completely Objective-C objects. Attempting to call a NULL block crashes not because blocks aren't Objective-C objects, but because block invocation is not a message send.
Note that accessing an instance variable of a nil Objective-C object pointer will also crash, e.g.:
obj->_ivar;
That's obviously not because Objective-C objects aren't Objective-C objects. The crash when calling a nil block is the same thing.
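A quick sketch of the distinction (Thing and _ivar are made up):
@interface Thing : NSObject {
  @public int _ivar;
}
- (void)doSomething;
@end

void demo(void) {
    Thing *obj = nil;
    [obj doSomething];       // message send: objc_msgSend checks for nil, so this is a no-op
    // int x = obj->_ivar;   // direct ivar access: a plain pointer dereference, crashes
    void (^block)(void) = nil;
    // block();              // block call: jumps straight to the block's invoke pointer, crashes
}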
>At least in javascript it's easier with the && operator:
str && str.doSomething()
Mixing a Boolean check and a potentially side-effecting function in the same line of code is a neat hack, but it seems like it would make code harder to maintain. Also, what about handling the non-existence of str?
I'm considering writing a similar post. For me the biggest issue was how steep the learning curve for the whole Cocoa (referring mostly to UIKit, but also to a few other pieces) shebang was. It's not even that the system is badly designed (arguably it could be a lot better though) but that the documentation is so, so bad. API docs are okay, but don't give you the big picture. Their reference guides, on the other hand, are so abstract and 1000-mile-high that you don't really get practical value out of them.
After that, there's the system itself, with its small inconsistencies, imprecise documentation and undocumented behaviour. In short, it's not very "systematic". You don't feel there's a well thought-out system that will allow you to predict what will happen. This is especially true when talking about layout, but also other things such as how views / controllers interact. There's not a culture of "best practices", so interfacing with someone else's UI libraries can be hell.
I haven't done enough UIKit stuff to really comment on it, other than to say I also still find a lot of the conventions confusing or at least weird and also find that the documentation fails to answer my questions.
I also don't get the distinction between a UIView and a UIViewController. In C# land I always associated view=.xaml and controller=.cs. In Obj-C land both views and view controllers have a .xib and .m/.h, so it's not clear to me how responsibilities are divided.
Thanks, that's actually a helpful thing to mention and obvious in hindsight. A view controller can be the owner of a .nib but not the class of a .nib, so the .nib is never a view controller.
Are there particular divisions of responsibility that come up w.r.t. what you put in a view's .m/.h vs what you put in the controller's .m/.h? Is the .m/.h of a view considered to be a controller in the MVC sense but not in the UIViewController sense?
I don't intend to belittle the author's writeup of their experience with Objective-C. I've been working with the language off and on for over ten years, now, and have been working almost exclusively in it professionally for the last 3 or 4, and I have a ton of complaints about it. But...
With one exception, I feel like this entire post could be summed up as 'I don't like dynamic languages, static typing FTW!' The one exception is with regard to merging pbxproj files, which, I agree, is a fantastically awful experience. The only worse experience I have ever had working with this stack is in trying to merge XIBs (short answer: just don't even bother trying).
Getting off topic, but have you seen the new xib format in Xcode 5? It's totally different and way better. I haven't actually tried merging one yet, but I'm highly optimistic for when it comes up.
No, I haven't. I'm guessing you have to either explicitly upgrade your existing XIBs to it, or recreate them? Thanks for the heads-up, I'll check it out.
Also, thanks again for the ridiculously awesome blog. It's been one of my favorite reads for years.
edit: OK, I had a chance to check out the file format. This looks far more sane and human-readable than it has in the past.
Related: I will happily pay $100 to anyone for a piece of software that has an auto layout constraint generator that works with XIB files and doesn't suck.
I think you have to manually convert existing ones. I'm fuzzy on the exact details, but I think there's a popup in the right-side info pane that lets you choose which Xcode version (and thus which format) you want to use for the xib.
The type system is definitely where I feel the most friction.
However, I think a summary of "dynamic bleh" omits the bits of praise and discussion that I consider important to the content. I'm also not sure if nil-vs-null counts as dynamic-vs-static.
In defense of ObjC's type system (which is the author's major gripe).
It is a duck-typed language. This is the philosophy that "if it walks like a duck... it's a duck." This is the philosophy used by lots of modern languages like Python, Ruby, and JS.
Meanwhile there is the statically-typed system, which wants you to declare things to be ducks explicitly (or to pick up a duck declaration through an inheritance system). This is the philosophy used by Java, C++, etc.
Anyway, the author's gripe isn't with ObjC's types. The author's gripe is with any duck-typed system. The whole point of these type systems is to avoid providing any strong guarantees about types. If you want strong typing, use a strongly-typed language. C++ may align better with this author's programming ideology.
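For instance, duck typing in ObjC looks something like this (made-up classes, just a sketch):
@interface Duck : NSObject
- (void)quack;
@end
@implementation Duck
- (void)quack { NSLog(@"quack"); }
@end

@interface RobotDuck : NSObject
- (void)quack;
@end
@implementation RobotDuck
- (void)quack { NSLog(@"QUACK.EXE"); }
@end

// No common base class or protocol required -- anything that responds to
// -quack at runtime will do.
void makeItQuack(id thing) {
    [thing quack];
}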
It's also worth pointing out that runtime type checks are, all else being equal, slow. ObjC is used primarily in mobile environments where speed is a concern. As the author discovered, you can opt in to type checking by using NSAssert. This forces you to think about whether the cost of a dynamic type check is acceptable. NSAsserts are also disabled by default in release builds, so this gives you checking when you need them (development) and speed when you don't (production).
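The opt-in check mentioned above looks roughly like this (handleResponse: is a hypothetical method):
- (void)handleResponse:(id)response {
    // Compiled out when NS_BLOCK_ASSERTIONS is defined (the usual release setting),
    // so the dynamic check only costs anything in debug builds.
    NSAssert([response isKindOfClass:[NSDictionary class]],
             @"Expected a dictionary, got %@", [response class]);
    NSDictionary *dict = response;
    NSLog(@"status = %@", dict[@"status"]);
}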
C++'s type system is deliberately constructed in such a way that the compiler can do the bulk of the type enforcement, which is much faster than runtime type checking. Alternately stated, this means that the programmer must write code in such a way that the compiler can infer/enforce type checks, or else they will get type errors. The author's complaints about not enough compiler errors seem to suggest to me that he would be more at home in a language like C++ that is specifically designed for this purpose.
I definitely prefer static typing over dynamic typing. I don't really mind duck typing so much as I mind dynamic duck typing (for example I like how Go does duck typing).
I do try to mention that preference for static typing in the post a few times, to make it clearer that it's the dynamism that's bothering me (as opposed to the design being wrong).
Discussions about typing seem pretty fruitless; there are often a lot of people talking past each other because they're not working off of shared definitions of what the terms (typed, untyped, static, dynamic, strong, weak) actually mean.
A better comparison here is with C/C++/go, e.g. other languages that are designed to be fast and compiled.
In any of those, dereferencing a null pointer is undefined. The compiler can literally emit code that plays Nethack in this situation [0]. ObjC's "nothing will happen" behavior is a significant improvement in safety against similarly-situated languages.
Not defending "undefined" here, at all. I'd rather see the definition be, at least, exit(0xff) (or some other mechanism that reliably denotes "method not found", "message not found", etc).
I'm pretty sure I'm on record on HN for ranting about the fact that C and C++'s undefined and funny behavior literally created multimillion-dollar industries in building analysis tools to cover their gaping problems. Which is why I tend to boost Rust when possible (designed for correctness and determinism! woo!).
Dropping errors is not IMO better or worse. Both are silent failure modes.
Well, first, you aren't calling a method, you are sending a message. It's different. So, of course, sending a message to somebody that isn't home is going to yield you an empty response.
Er... I would hope that I get some sort of Not At This Address notification (in some fashion - exception, special return value, option type, etc, etc, etc.). Dropping the ball like that by default is really not helpful in building correct systems.
In C/C++, dereferencing the null pointer is undefined. That is a hell of a lot more dangerous than doing nothing. ObjC's "defined to do nothing" behavior is a significant step up from the other languages in the "fast, compiled, bare metal" cluster (C,C++).
It's rather subjective to say that's a "significant step up". "Dereferencing a null" indicates a potentially serious underlying issue that the "do nothing" hand-holding you describe masks.
nil just always comes back with 0 or nil or whatever -- it's like a really dumb version of None in the Maybe monad, trying to get you some of the convenience (except way, way worse from a safety perspective).
For extra fun, NSNull (the "designated" object, as opposed to the actual zero value nil, for representing nil) doesn't respond to arbitrary messages the way nil does -- i.e., if you replace nil with NSNull somewhere in the code, it will most likely blow up.
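Concretely (a sketch):
// NSNull is a real placeholder object for collections (which can't hold nil),
// but unlike nil it doesn't swallow messages.
NSArray *values = @[@"a", [NSNull null], @"b"];
for (id value in values) {
    // [value length];   // blows up on the NSNull element: unrecognized selector
    if (value != [NSNull null]) {
        NSUInteger len = [value length];
        NSLog(@"%lu", (unsigned long)len);
    }
}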
In my experience, the BCL is more comprehensive. For example, the iOS framework lacks a built-in queue or a sorted list (as far as I know).
One thing Obj-C does have, that I miss in C#, is equality methods defined for collections. You can compare the contents of arrays and sets and dictionaries without writing any special code or using special methods. (Although you can break it by creating cyclical structures, since it does naive recursion instead of unification.)
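For example (a quick sketch):
// Content equality comes for free on Foundation collections via -isEqual:.
NSArray *a = @[@1, @2, @[@"nested"]];
NSArray *b = @[@1, @2, @[@"nested"]];
BOOL sameArray = [a isEqual:b];      // YES, compares contents recursively

NSDictionary *d1 = @{ @"k": @[@1, @2] };
NSDictionary *d2 = @{ @"k": @[@1, @2] };
BOOL sameDict = [d1 isEqual:d2];     // YES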
The convention for this sort of thing is to use an NSMutableArray, inserting at position 0 to enqueue and removing the last object to dequeue. NSArray is actually written in a way that makes this efficient (for some great analysis see http://ridiculousfish.com/blog/posts/array.html).
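Something like this (a sketch; job is a stand-in for whatever you're queuing):
NSMutableArray *queue = [NSMutableArray array];

// enqueue at the front
[queue insertObject:job atIndex:0];

// dequeue from the back
id next = [queue lastObject];
[queue removeLastObject];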
> or a sorted list (as far as I know)
For sorted lists, I think you're right, you'd have to do it yourself.
The question a Cocoa old-hand might ask would be if you _really_ need it to be always-sorted, or if you can keep your objects in an array, dictionary or set and sort them before using or iterating - or maybe just iterate the collection in a specific order instead of sorting it at all.
If you do need to keep it sorted all the time, note that _inserting_ into an NSArray is also more efficient than you might assume. Keeping a sorted NSArray up to date yourself might not be expensive (it would be a bit more complicated than an auto-sorting class though, of course).
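If you do go that route, a sketch of keeping an NSMutableArray sorted by finding each insertion point with a binary search:
NSMutableArray *sorted = [NSMutableArray array];
NSNumber *newValue = @42;

// Find where the new value belongs, then insert it there.
NSUInteger index = [sorted indexOfObject:newValue
                           inSortedRange:NSMakeRange(0, sorted.count)
                                 options:NSBinarySearchingInsertionIndex
                         usingComparator:^NSComparisonResult(id a, id b) {
                             return [(NSNumber *)a compare:(NSNumber *)b];
                         }];
[sorted insertObject:newValue atIndex:index];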
So Objective C is not what you're used to? Awh... It's no secret that Objective C software is generally richer and more robust, thanks exactly to its "strange" differences. Sure it takes effort to learn a new way of doing things but if you're willing, you'll soon realize it was worth it. Dynamic and compiled is a great combination.