In Java, singletons are enforced by the type system, i.e., there's no public way to make more of the object. Usually that's what people mean by 'singleton' in Java. If you don't have the source and you outgrow the single instance, there's nothing you can do except stop using the class.
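A minimal sketch of that enforced pattern in Java (class and method names are just illustrative): the private constructor is what makes it impossible to create extra instances.

```java
// Illustrative Java singleton: the type system enforces "one instance"
// because the constructor is private, so no outside code can construct more.
public final class Settings {
    private static final Settings INSTANCE = new Settings();

    private Settings() {}  // no public way to make another

    public static Settings getInstance() {
        return INSTANCE;
    }
}
```

Every caller of `Settings.getInstance()` gets the same object, and there is no supported way around it without editing the source.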
In Objective-C, people do occasionally simulate that by munging init and alloc, but that is definitely a bad idea. More often a singleton class will have a conventional init and alloc, but have a class method like +sharedInstance which lazily instantiates and dispenses a single instance of the object during the lifetime of the application. That's what people most often mean by 'singleton' in Objective-C. It's basically a global variable, but when you outgrow it, it's often not hard to switch to allocating more than one.
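The +sharedInstance convention, sketched here as a Java analogue since it's the same idea (names are illustrative): alloc/init, i.e. the constructor, stays public, and a class method lazily creates and hands out one shared instance — but nothing stops a caller who outgrows it from allocating more.

```java
// Java analogue of the Objective-C +sharedInstance convention.
// Unlike an enforced singleton, the constructor stays public, so
// callers who outgrow the shared instance can simply allocate their own.
public class MessageBus {
    private static MessageBus shared;  // lazily created on first access

    public MessageBus() {}  // still publicly constructible

    public static synchronized MessageBus sharedInstance() {
        if (shared == null) {
            shared = new MessageBus();
        }
        return shared;
    }
}
```

This is why evolving past this kind of singleton is usually painless: existing callers keep using `sharedInstance()`, while new code just calls `new MessageBus()`.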
Also, partly due to what many would see as a deficiency (it's much harder to produce architecture-independent libraries in Objective-C than in Java), the source is available more of the time, so evolving past a singleton is more often straightforward.
The correct way to deal with it is to realise that if you need some settings that can be accessed/changed from many different classes, then the design of your app is severely lacking. However, this doesn't necessarily mean that you need a more complex/sophisticated 'design'. I.e., if you stick to TDD, this situation is highly unlikely.
Hopefully this will stem the tide of questions on Stack Overflow about different Objective-C singleton implementations.
The correct way to deal with this is to realize you have a problem? I think the above comment is willing to admit a problem and is asking for a solution. What does TDD have to do with it?
I'm also interested in hearing what is considered best practice for things like passing around config options, an event loop, or a message bus.
The solution is essentially to take a hard look at your design and see why you have to pass around config options, event loops, or message buses to begin with. The need to access global state is a code smell.
When you DO need global state, then a singleton object is a reasonable way to achieve it. But chances are fairly high that you don't.
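One common alternative to global state, sketched here under assumed names: construct the settings once, at the top of the app, and pass them explicitly to the objects that need them.

```java
// Hypothetical sketch: configuration handed in explicitly
// instead of being looked up from a global singleton.
class Config {
    final int volume;
    Config(int volume) { this.volume = volume; }
}

class AudioPlayer {
    private final Config config;  // dependency passed in, not fetched

    AudioPlayer(Config config) { this.config = config; }

    int volume() { return config.volume; }
}
```

A caller would write `new AudioPlayer(new Config(7))`. This is also where TDD comes in: a test can hand each object its own Config, with no shared state to set up or tear down between tests.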
You can look for days, and if you don't know any of the alternatives, none will pop out.
As I'm not in the mobile sphere, I wouldn't know how to avoid a singleton for an app that is constrained by speed and space requirements. I've already shown an example above, what's your solution in this environment, with the constraints of the questioner?
> if you need some settings that can be accessed/changed from many different classes then the design of your app is severely lacking.
If he's making a game, this is completely acceptable design. Games programming is wildly different from application or server programming. Also, strict TDD is much, much less useful in games programming than anywhere else. Writing unit tests for central data structures and algorithms is useful, of course, but the majority of modules/classes/objects are so small and uncritical, and change so completely and so often, that it's not worth bothering.
I worked at a world famous design studio. I was horrified to discover that there could be no 'brain storming' - everything you said or every drawing you showed would be judged and held against you. It made it almost impossible for someone to come up with a truly creative solution. You had to be very careful and measured about everything you said.
The key phrase is held against you. This means bad ideas were punished, giving people an incentive to not be risky. Whereas the point of Pixar's process is you don't have to punish bad ideas--they won't become consequential.
That's an impression I got from reading the post (no time to watch the video). I also thought: Well, how many times do you get to suck before they think you don't belong there? Some ideas do need time to percolate and encapsulating them in a soundbite can lead to rejection without understanding the underlying depth. [This was previously posted elsewhere minutes ago due to a mis-click.]
I want to also add: My comment might be misinterpreted to think the person sucking is someone who's incompetent. That's not what I meant at all. But anyone can have a run of plain bad ideas before they strike gold. Or even silver.
Not quite, Windows uses hinting information from the font, OSX doesn't. Many freely available fonts contain terrible or no hinting, and they will look bad at small sizes on Windows.
Even more important: Windows uses _very_ aggressive hinting for its default fonts, especially the new set introduced with Office 2007 and Windows Vista (Calibri, Cambria, Consolas). Though I very much like the effect for my programming font (Consolas is great in that respect), it destroys the scalability of a font to a very high degree. This is the reason why zooming a web page in any browser reflows your text, and why some fonts look different in shape and/or height/width ratio at different zoom levels (besides the rendering engine's decision to round font sizes to the next full pixel).
IE9 promises to not do this anymore, and on high-density displays (the better smartphone ones) it is simply not needed.
So the problem is two-fold: a very distinctive look of Microsoft fonts, and a sub-optimal fallback for non-aggressively hinted fonts in Windows.
However, if one uses font sizes that one can actually read (for instance, bigger than 12pt, thank you all very much), the pixel-per-character count gets big enough that the results start looking better -- especially as the user does not unconsciously move his nose to meet the display in person.