I really, really dislike Swift's approach. get/set/willSet/didSet -- so much complication to preserve the illusion that you are operating on a field when you are actually doing no such thing. Why is this desirable? It reminds me of a class of C++ footguns where some innocuous code is actually invoking member functions due to operator overloading.
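For readers who haven't used Swift, here is a minimal sketch of the machinery in question (the type and property names are invented for illustration): an ordinary-looking assignment runs the observers, and a computed property routes reads and writes through get/set while call sites still look like field access.

    struct Thermostat {
        // Stored property with observers: a plain-looking assignment
        // actually runs both of these blocks.
        var targetCelsius: Double = 20.0 {
            willSet { print("about to change to \(newValue)") }
            didSet  { print("changed from \(oldValue)") }
        }

        // Computed property: reads and writes go through get/set,
        // but callers still write `t.targetFahrenheit` as if it were a field.
        var targetFahrenheit: Double {
            get { targetCelsius * 9 / 5 + 32 }
            set { targetCelsius = (newValue - 32) * 5 / 9 }
        }
    }

    var t = Thermostat()
    t.targetCelsius = 22        // fires willSet, then didSet
    t.targetFahrenheit = 72     // runs the setter; no field is written directly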
I think that Java got this one right. (I do not at all like the cargo cult custom of getter/setter methods for fields; I'm referring to the language only.)
GP was referring to the feature that getters/setters are invoked with property access. Swift's didSet/willSet distinction is unrelated to this feature. ES6 JavaScript, for instance, has this feature with just get/set methods.
In Java (and JS) there's a separate, all-knowing garbage collector, while Swift is reference-counted. In an ARC runtime, the didSet/willSet distinction avoids explicit calls to release the object, which is pretty clearly a good thing on the programmer's end. You can debate whether the benefits of full garbage collectors outweigh their performance characteristics, but given ARC, the didSet/willSet distinction definitely makes sense.
Today's fields are tomorrow's computed properties. Fields are rigid and cannot be changed without recompiling dependencies. Notice how few fields there are in Java's standard library. Why have fields at all?
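To make that concrete, a small Swift sketch (the names are made up): version 1 exposes a stored property, version 2 replaces it with a computed one, and client code that reads `rect.area` does not change.

    // Version 1 of a library type: `area` is a stored property
    // that has to be kept in sync by hand.
    struct RectangleV1 {
        var width = 0.0
        var height = 0.0
        var area = 0.0
    }

    // Version 2: `area` becomes a computed property.
    // Source at the call site -- `rect.area` -- stays exactly the same.
    struct RectangleV2 {
        var width = 0.0
        var height = 0.0
        var area: Double { width * height }
    }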
Plenty of times! The situation where a library released to third parties requires internal structural changes is not an uncommon one. What do you do? Break every piece of third party code or satisfy the new structure and the old interface simultaneously with a computed property? "Move fast and break stuff" doesn't always have to include breaking stuff.
Honestly, many times. A common occurrence is that the singular evolves into the plural: "the touch position" becomes "the touch positions," "selected item" becomes "selected items," etc. The backing storage switches from T to [T].
In this scenario it's easy to make the old method do something sensible like "return the first item." But maintaining an old field is more difficult: you can't make it lazy, you can't eliminate its backing storage, etc.
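A hedged Swift sketch of that evolution (`Item` and the property names are invented): the backing storage becomes an array, and the old singular property survives as a computed wrapper over it.

    struct Item { let id: Int }

    struct Canvas {
        // New backing storage: multiple selections.
        var selectedItems: [Item] = []

        // Old singular API, kept alive as a computed property so existing
        // callers keep compiling; reads return the first item, writes wrap
        // the single value (or nil) back into the array.
        var selectedItem: Item? {
            get { selectedItems.first }
            set { selectedItems = newValue.map { [$0] } ?? [] }
        }
    }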
Quite often, especially if you make a variation of an existing class (or an extra type in an ADT).
Scala does this quite transparently. Something defined as a `var` (variable), `val` (immutable variable), `lazy val` (lazy immutable variable) or `def` (method) can be called in a source- and binary-compatible way from the caller's side.
I've never seen this trick anyone, except for maybe unexpected bad performance.
Aside from what the other poster mentioned, one major advantage is that I don't want to write getXXX and setXXX everywhere. Where I do need computed properties, does it matter whether the value is precomputed or computed lazily (for example, a float derived from another float)?
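A small sketch of that last case, with made-up names: one Float derived from another, exposed as a computed property that reads exactly like a field.

    struct Temperature {
        var celsius: Float = 0

        // Derived from `celsius`; callers can't tell (and don't need to care)
        // whether this is stored, computed on every read, or cached.
        var fahrenheit: Float { celsius * 9 / 5 + 32 }
    }

    let t = Temperature(celsius: 21)
    print(t.fahrenheit)   // reads like a field; no getFahrenheit() boilerplate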
YAGNI. Seriously. That pretty much describes the main problem with Java - "maybe we'll need this coffee maker to also do julienne fries in the future!"