
It's really hard for me to read past Lattner's quote. "Beautiful minimal syntax" vs "really bad compile times" and "awful error messages".

I know it's not helpful to judge in hindsight, lots of smart people, etc.

But why on earth would you make this decision for a language aimed at app developers? How is this not a design failure?

If I read this article correctly, it would have been an unacceptable decision to make users write setThreatLevel(ThreatLevel.midnight) in order to have great compile times and error messages.

Can someone shed some light on this to make it appear less stupid? Because I'm sure there must be something less stupid going on.
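For concreteness, here's a minimal Swift sketch of the two call styles the trade-off is about (ThreatLevel and setThreatLevel are the example names used in the thread; the extra case names are made up):

    enum ThreatLevel { case guarded, elevated, midnight }

    func setThreatLevel(_ level: ThreatLevel) { /* ... */ }

    // Fully qualified: the type checker has almost nothing to infer.
    setThreatLevel(ThreatLevel.midnight)

    // Implicit member syntax: the compiler must work out the base type
    // from the expected parameter type, which is part of what the
    // article ties to inference cost.
    setThreatLevel(.midnight)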


> aimed at app developers

I'm a native Swift app developer, for Apple platforms, so I assume that I'm the target audience.

Apps aren't major-league toolsets. My projects tend to be fairly big, for apps, but the compile time is pretty much irrelevant, to me. The linking and deployment times seem to be bigger than the compile times, especially in debug mode, which is where I spend most of my time.

When it comes time to ship, I just do an optimized archive, and get myself a cup of coffee. It doesn't happen that often, and is not unbearable.

If I was writing a full-fat server or toolset, with hundreds of files, and tens of thousands of lines of code, I might have a different outlook, but I really appreciate the language, so it's worth it, for me.

Of course, I'm one of those oldtimers that used to have to start the machine, by clocking in the bootloader, so there's that...


I tried to fix a bug in Signal a few years ago. One part of the code took so long to do type inference on my poor old Intel MacBook that the Swift compiler errored out. I suppose waiting was out of the question, and I needed a faster computer to be able to compile the program.

That was pretty horrifying. I’ve never seen a compiler that errors nondeterministically based on how fast your cpu is. Whatever design choices in the compiler team led to that moment were terrible.
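To illustrate the failure mode (a hedged, hypothetical example, not the actual Signal code): heavily overloaded operators combined with untyped literals can blow up the type checker's search space, and older toolchains would give up with a diagnostic along the lines of "the compiler is unable to type-check this expression in reasonable time".

    // Hypothetical illustration: each operator and literal has many candidate
    // overloads (Int, Double, Float, ...), so the solver explores a combinatorial
    // number of typings before settling on one.
    let total = -(1 + 2) + -(3 + 4) + -(5 + 6) + -(7 + 8) + -(9 + 10)

    // The usual workaround is to annotate or split the expression:
    let a: Int = -(1 + 2) + -(3 + 4)
    let b: Int = -(5 + 6) + -(7 + 8) + -(9 + 10)
    let totalFixed = a + b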


That sounds like Web sites, designed by designers with massive monitors.

The tools team probably had ultra-fast Macs, and never encountered that.

It definitely sounds like a bug in the toolset. I hope that it was reported.


The author mentioned zig, and zig would get this right: you can just write `setThreatLevel(.midnight)`.

But where zig breaks down is on any more complicated inference. It's common to end up needing code like `@as(f32, 0)` because zig just can't work it out.

In awkward cases you can end up with chains of several @ casts just to keep the compiler in the loop about what type to use in a statement.

I like zig, but it has its own costs too.


I'm not very in the loop regarding zig at the moment, but coming from C I would think you could just use 0.0 or 0.0f? Is that not the case?


Depends on the context but in general Zig wants you to be explicit. Writing 0.0 is fine at comptime or if you're e.g. adding it to an already known float type, but defining a runtime variable (var x = 0.0; ...) will not work because x doesn't have an explicit type (f16, f32, f64, f80, or f128?). In this case, you would need to write "var x: f32 = 0". You could write "var foo = @as(f32, 0)" but that's just a weird way of doing things and probably not what OP meant.


I can’t offer much in the way of reasoning or explanation, but having written plenty of both Swift and Kotlin (the latter of which being a lot like a more verbose/explicit Swift in terms of syntax), I have to say that in the day to day I prefer the Swift way. Not that it’s a terrible imposition to have to type out full enum names and such, but it feels notably more clunky and less pleasant to write.

So maybe the decision comes down to not wanting to trade off that smooth, “natural” feel when writing it.


Bear in mind that in both Java and Kotlin you can statically import enum entries:

    import foo.SomeEnum.MIDNIGHT

    setThreatLevel(MIDNIGHT)
In practice, you write out ThreatLevel.MIDNIGHT, let the IDE import it for you, and then use an IDE hotkey to do the static import and eliminate the prefix.


This is true, with the caveat that your imports don’t share member names in common (e.g. you can’t static import two “medium” members, one from SpiceLevel and the other from PizzaSize). Swift doesn’t have this restriction, at least as far as enums are concerned.
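A quick Swift sketch of that (the types here are hypothetical): the leading-dot shorthand is resolved against the expected type at each use site, so identically named cases don't clash.

    enum SpiceLevel { case mild, medium, hot }
    enum PizzaSize  { case small, medium, large }

    func order(spice: SpiceLevel, size: PizzaSize) { /* ... */ }

    // Each `.medium` is resolved from the corresponding parameter's type,
    // so there is no ambiguity even though both enums declare `medium`.
    order(spice: .medium, size: .medium)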


Swift’s stated goal has always been to be a language which can scale from systems programming to scripts. Apple themselves are writing more and more of their own stuff in Swift.

Calling it “a language aimed at app developers” is reductive.


Here's some light to make it appear less stupid:

He doesn't claim it's not a design failure.

He doesn't say they sat down and said "You know what? Let's do beautiful minimal syntax but have awful error messages & really bad compile times."

The light here is recursive. As you lay out, it is extremely s̶t̶u̶p̶i̶d̶ unlikely that choice was made, actively.

Left with an unlikely scenario, we take a step back and question if we have any assumptions: and our assumption is they made the choice actively.


The alternatives are even less charitable to the Swift creators.

Surely, early in the development someone noticed compile times were very slow for certain simple but realistic examples. (Alternatives: they didn't have users? They didn't provide a way to get their feedback? They didn't measure compile times?)

Then, surely they sat down considered whether they could improve compile times and at what cost, and determined that any improvement would come at the cost of requiring more explicit type annotations. (Alternatives: they couldn't do the analysis the author did? The author is wrong? They found other improvements, but never implemented them?)

Then, surely they made a decision that the philosophy of this project is to prioritize other aspects of the developer experience ahead of compile times, and memorialized that somewhere. (Alternatives: they made the opposite decision, but didn't act on it? They made that decision, but didn't record it and left it to each future developer to infer?)

The only path here that reflects well on the Swift team decision makers is the happy path. I mean, say what you like about the tenets of Swift, dude, at least it's an ethos.


> Alternatives: they didn't have users?

Correct, it is well known that they kept Swift a bizarre secret internally. It seems no one thought it would be a good idea to consult with the vast swathes of engineers that had been using the language this was intended to replace for the last 30 or so years, nor to consult with the maintainers of the frameworks this language was supposedly going to help write, etc. As you can imagine, this led to many problems beyond just not getting a large enough surface area of compiler performance use cases.

Of course, after it was released, when they seemed very willing to make backward-incompatible changes for 5 years, and in theory they then had plenty of people running into this, they apparently still decided to not prioritize it.


Quick note: a lot of the broader things you mention are exactly the case, e.g. prioritizing backwards compatibility and ABI stability at all costs was a big kerfuffle around Swift 3/4 and is publicly documented, as is the limited use initially.

Broad note: there's something off with the approach in general. E.g. we're not trying to find the interpretation that's most favorable to them, just a likely one. E.g. it assumes perfect future knowledge, allowing objectively correct decisions on sequencing at any point in the project lifecycle. E.g. it's entirely possible they had automated testing on this, but it turns out the numbers go deep red anytime anyone adds operator overloading anyway in Apple-bundled frameworks.

Simple note: as a burned-out ex-bigco engineer: someone got wedded to operator overloading, and it was an attractive CS problem where "I can fix it... or at least, I can fix it in enough cases" was a silent thought in a lot of ICs' heads.

That's a guess, but somewhat informed, in that this was "fixed"/"addressed" and a recognized issue several years ago, and I watched two big drives at it, with two different Apple people taking the lead on patching and commenting on it publicly.


So they didn't focus actively on good error messages and fast compile times when designing a new language?


If we have to flatten it to "they chose and knew exactly what choice they were making", then there's no light to be shed. Sure. That's stupid.

It's just as stupid to insist on that being the case.

If that's not convincing to you on its merits, consider another aspect: you were expressly inviting conversation on why that wasn't the case.


Why is there no light to be shed?

This is a perfectly reasonable question to ask. And a straight simple answer might be that no, they didn't. Or not initially but later it was too late. Or here are the circumstances in leadership, historical contexts that led to it and we find those in other projects as well.

That would be interesting to hear.


I've kind of lost the plot myself. XD The whole concept seems a bit complicated to me.

You're holding out on responding constructively until someone on the Swift team responds?

Better to just avoid boorishness, or find another way to word your invitation to the people who you will accept discussion from.

I wouldn't go out of my way to engage in the comments section with someone who calls me stupid repeatedly, based on an imagined analysis of my thought process being that of a small child, and who then refuses to engage with any discussion that doesn't start with "yes, my team was stupid; we actively chose awful error messages and really bad compile times for pretty syntax."


What in the world is this even saying?


The designers of the language didn't intend for it to end up this way; it just worked out like it did. GP is pointing out that their parent assumed they intentionally chose pretty syntax over speed, when it was more likely that they started with the syntax without considering speed.


What's the difference between "choosing pretty syntax over speed" and "start with syntax without considering speed"?


Intentions.

The first is a conscious decision, the second is not.


Honestly it follows the design of the rest of the language. An incomplete list:

1. They wrote it to replace C++ instead of Objective-C. This is obvious from hearing Lattner speak, he always compares it to C++. Which makes sense, he dealt with C++ every day, since he is a compiler writer. This language does not actually address the problems of Objective-C from a user-perspective. They designed it to address the problems of C++ from a user-perspective, and the problems of Objective-C from a compiler's perspective. The "Objective-C problems" they fixed were things that made Objective-C annoying to optimize, not annoying to write (except if you are a big hater of square brackets I suppose).

2. They designed the language in complete isolation, to the point that most people at Apple heard of its existence the same day as the rest of us. They gave Swift the iPad treatment. Instead of leaning on the largest collection of Objective-C experts and dogfooding this for things like ergonomics, they just announced one day publicly that this was Apple's new language. Then proceeded to make backwards-incompatible changes for 5 years.

3. They took the opposite approach of Objective-C, designing a language around "abstract principles" vs. practical app decisions. This meant that the second they actually started working on a UI framework for Swift (the theoretical point of an Objective-C successor), 5 years after Swift was announced, they immediately had to add huge language features (view builders), since the language was not actually designed for this use case.

4. They ignored the existing community's culture (dynamic dispatch, focus on frameworks vs. language features, etc.) and just said "we are a type obsessed community now". You could tell a year in that the conversation had shifted from how to make interesting animations to how to make JSON parsers type-check correctly. In the process they created a situation where they spent years working on silly things like renaming all the Foundation framework methods to be more "Swifty" instead of...

5. Actually addressing the clearly lacking parts of Objective-C with simple iterative improvements which could have dramatically simplified and improved AppKit and UIKit. 9 years ago I was wishing they'd just add async/await to ObjC so that we could get modern async versions of animation functions in AppKit and UIKit instead of the incredibly error-prone chained didFinish:completionHandler: versions of animation methods (see the sketch after this list). Instead, this was delayed until 2021 while we futzed about with half a dozen other academic concerns. The vast majority of bugs I find in apps from a user perspective are from improper reasoning about async/await, not null dereferences. Instead, the entire ecosystem was changed to prevent nil from existing, under the false promise of some sort of incredible performance enhancement, despite the fact that all the frameworks were still written in ObjC, so even if your entire app was written in Swift it wouldn't really make that much of a difference in your performance.

6. They were initially obsessed with "taking over the world" instead of being a great replacement for the actual language they were replacing. You can see this from the early marketing and interviews. They literally billed it as "everything from scripting to systems programming," which generally speaking should always be a red flag, but makes a lot of sense given that the authors did not have a lot of experience with anything other than systems programming and thus figured "everything else" was probably simple. This is not an assumption, he even mentions in his ATP interview that he believes that once they added string interpolation they'd probably convert the "script writers".
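A minimal sketch of the async-wrapper pattern point 5 alludes to, assuming UIKit's completion-handler animation API (this is not Apple's actual async API, and strict-concurrency annotations are omitted for brevity):

    import UIKit

    // Hedged sketch: bridge a completion-handler animation call into async/await
    // so sequential animations read top-to-bottom instead of nesting callbacks.
    // `animateAsync` is a made-up helper name, not an Apple API.
    @MainActor
    func animateAsync(duration: TimeInterval, _ animations: @escaping () -> Void) async -> Bool {
        await withCheckedContinuation { continuation in
            UIView.animate(withDuration: duration, animations: animations) { finished in
                continuation.resume(returning: finished)
            }
        }
    }

    // Usage (inside an async, main-actor context):
    // _ = await animateAsync(duration: 0.3) { someView.alpha = 0 }
    // _ = await animateAsync(duration: 0.3) { someView.removeFromSuperview() }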

The list goes on and on. The reality is that this was a failure in management, not language design, though. The restraint should have come from above: a clear mission statement of what the point of this huge time-sink of a transition was. Instead there was some vague general notion that "our ecosystem is old", and then zero responsibility or care was taken, despite the understanding that you are more or less going to force people to switch. This isn't some open source group releasing a new language and having it compete fairly in the market (like, say, Rust). No, this was the platform vendor declaring this is the future, which IMO raises the bar on the care that should be taken.

I suppose the ironic thing is that the vast majority of apps are just written in UnityScript or C++ or whatever, since most of the App Store is actually games rather than utility apps written in the official platform language/frameworks, so perhaps at the end of the day ObjC vs. Swift doesn't even matter.


This is a great comment, you clearly know what you're talking about and I learned a lot.

I wanted to push back on this a bit:

> The "Objective-C problems" they fixed were things that made Objective-C annoying to optimize, not annoying to write (except if you are a big hater of square brackets I suppose).

From an outsider's perspective, this was the point of Swift: Objective-C was and is hard to optimize. Optimal code means programs which do more and drain your battery less. That was Swift's pitch: the old Apple inherited Objective-C from NeXT and built the Mac around it, back when a Mac was plugged into the wall and burning 500 watts to browse the Internet. The new Apple's priority was a language which wasn't such a hog, for computers that fit in your pocket.

Do you think it would have been possible to keep the good dynamic Smalltalk parts of Objective-C, and also make a language which is more efficient? For that matter, do you think that Swift even succeeded in being that more efficient language?


Let me preface this by saying performance is complicated, and unfortunately far more... religious than you'd expect. A good example of this is the Objective-C community’s old insistence that good performance was incompatible with garbage collection, despite decades of proof otherwise. This post [1] about Swift getting walloped by node.js is fascinating for seeing how a community responds to results that challenge its expectations (“what do you expect, the JS BigInt is optimized and ours isn’t”, “It’s slower, but uses less RAM!”, etc.). As it turns out, questions like GC vs. reference counting often end up being much more nuanced (and unsatisfactory) than which one is simply "faster". You end up with far more unsatisfying conclusions, like one being more deterministic but often slower (and, as it turns out, most UIKit apps aren’t realtime systems).

All this to say, it is hard to answer this question in one comment, but to try to sum up my position on this, I believe the performance benefits of Swift were and remain overblown. It’s a micro benchmark based approach, which as we’ll see in a second is particularly misguided for Swift's theoretically intended use case as an app language. I think increasingly people agree with this as they haven't really found Swift to deliver on some amazing performance that wouldn’t have been possible in Objective-C. This is for a number of reasons:

1. As mentioned above, the most important flaw with a performance-based Swift argument is that the vast majority of the stack is still written in Objective-C/C/etc. So even if Swift was dramatically better, it usually only affects your app's code. Oftentimes the vast majority of the time is spent in framework code. Think of it this way: pretend that all of iOS and UIKit were written in JavaScript, but then in order to "improve performance" you write your app code in C. Would it be faster? I guess, but you can imagine why it may not actually end up having that much of an effect. This was ironically the bizarre position we found ourselves in with Swift: your app code was in a super strict typed language, but the underlying frameworks were written in a loosey-goosey dynamic language. This is the exact opposite of how you'd want to design a stack. Just look at games, where performance is often the absolute top priority: the actual game engine is usually written in something like C++, but then the game logic is often written in a scripting language like Lua. Swift iOS apps are the reverse of this. Now, I'm sure someone will argue that the real goal is for the entire stack to eventually be in Swift, at which point this won't be an issue anymore, but now we're talking about a 20-year plan, where it seems weird to prioritize my Calculator app's code as the critical first step.

2. As it turns out, Objective-C was already really fast! Especially since, due to its ability to trivially interface with C and C++, a lot of existing apps in the wild had probably already topped out on performance. This wasn't like you were taking an install base of Python apps and getting them all to move over to C. This was an already low-level language, where many of the developers were already comfortable with the "performance kings" of the C family of languages. Languages which, for the record, have decades of really good tooling specifically to make things performant, and decades of engineering experience by their users to make things performant. And so, in practice, for existing apps, this often felt more like a lateral move. I actually remember feeling confused when, after the announcement of Swift, people started talking about Objective-C as if it was some slow language or something. Like, literally the year before, Objective-C was considered the low-level performance beast compared to, say, Android's use of Java. Objective-C just wasn't all that slow of a comparison point to improve that much on. The two languages even share the same memory management model (something that ends up having a big effect on performance characteristics). Dynamic dispatch (objc_msgSend) just does not end up dominating your performance graph when you profile your app.

3. But perhaps most importantly, I think there is a mirror misguided focus on language over frameworks as with the developer ergonomics issues I pointed out above. If you look at where the actual performance gains have come from in apps, I’d argue that it’s overwhelmingly been from conceptual framework improvements, not tiny language wins. A great example of this is CoreAnimation. Making hardware accelerated graphics accessible through a nice declarative API, such that we can move as much animation off the CPU and onto the GPU as possible, is one of the key reasons everything feels so great on iOS. I promise no language change will make anywhere near as big of a dent as Apple's investment in CoreAnimation did. I’d argue that if we had invested development time in, e.g., async/await in Objective-C, rather than basically delaying that work for a decade in Swift, we’d very possibly be in a much more performant world today.

Anyways, these are just a few of my thoughts on the performance side of things. Unfortunately, as time moves on, now a decade into this transition, while I find more people agreeing with me than, say, when Swift was first announced, it also becomes more academic, since it's not like Apple is going to go back and try to make an Objective-C 3 or something now. That being said, I do think it is still useful to look back and analyze these decisions, to avoid making similar mistakes in the future. I think the Python 2 to 3 transition provided an important lesson to other languages; I hope someday we look at the Swift introduction as a similar cautionary tale of programming language design and community/ecosystem stewardship and management.

1. https://forums.swift.org/t/standard-vapor-website-drops-1-5-...


To add to the GC discussion: something that many who weren't around for the failure of the Objective-C GC project don't realize is that ARC was a pivot from a failed project, but in good Apple fashion they had to sell the history in their own way.

The GC for Objective-C failed because of the underlying C semantics: it would never be better than a typical conservative GC, and there were routinely application crashes when mixing code compiled with the GC and non-GC options.

Thus they picked up the next best strategy, which was to automate Cocoa's retain/release message pairs, and sell that as being much better than GC "because performance and such", not because the GC approach had failed.

Naturally, given Objective-C's evolution, and as proven by the complexity of .NET's COM interop layer, it was also much better for Swift to adopt the same approach than to create a complex layer similar to CCW/RCW.

Now everyone that wasn't around for this, kind of believes and resells the whole "ARC because performance!" story.


Do you happen to have any source/book on why you can't use anything but a conservative gc on C-like languages? I would really like to know why that's the case.


Basically, C semantics are to blame: due to the way C was designed and the liberties it allows its users, it is like programming in Assembly from a tracing GC's point of view.

Meaning that, without any kind of metadata, the GC has to assume that any value on the stack or in global memory segments is a possible pointer, but it cannot be sure about it; it might just be a numeric value that happens to look like a valid pointer to GC-allocated data.

So any algorithm that needs to be certain about the exact data types before moving data is already off the table where C is concerned.

See https://hboehm.info/gc/ for more info, including the references.


Thank you!


Great posts. Objective-C is still my programming language of choice.

> Now, I'm sure someone will argue that the real goal is for the entire stack to eventually be in Swift, at which point this won't be an issue anymore, but now we're talking about a 20-year plan, where it seems weird to prioritize my Calculator app's code as the critical first step.

It seems like it is the goal for at least some people at Apple. But so many Swift frameworks rely on Objective-C frameworks (SwiftUI wraps lots of UIKit. SwiftData is built on top of CoreData, etc.)

In twenty years Swift will be roughly the same age Objective-C was when Swift was introduced (give or take). By then the Swifties will be getting old and gray. I think it’s reasonable to bet that some young blokes will be pushing a new programming language/UI framework by then. I’m not sure Apple can replace the entire Objective-C stack even if they wanted to. Maybe if they spent the next five years not working on any new features and did nothing but port all the frameworks to pure Swift (we know Apple will never do that).

Unless a new hardware platform takes off, supersedes iOS/macOS, and starts out Swift-only, I just don't think Apple can rid themselves of Objective-C (I personally think that they shouldn't even want to get rid of Objective-C). But watchOS doesn't have many developers, and visionOS wants all existing iOS and macOS apps to work because they want a large ecosystem.

I sometimes wonder if Objective-C will outlive Swift. Sure it’s the underdog but I always root for the underdog. I hope someone will make an Objective-C 3.0 even if it isn’t Apple.


As someone interested in Apple app dev, would you recommend still starting with ObjC? I notice the dev behind the Swiftcord app (open source Discord client in Swift) has noted at length how much you still need to call into UIKit to get things done as there were a lot of blind alleys in SwiftUI.


Some insightful home truths there! I do enjoy programming in Swift more than ObjC but, ironically, it’s mostly on Linux.


I'd say the biggest problem was that they developed the language in isolation. Lattner is not making that mistake with Mojo


Great context. The whole narrative you present kind of raises the question of whether Swift could actually be a good systems language.

As a SwiftUI app developer I feel like this (and the OP's post) lines up with my experience, but I've never tried it for e.g. writing an API server or CLI tool.


Great short read on a concept we can find in many places, including software engineering or even UX design in general.


Could you explain what you see in more detail? I don't quite understand what you mean and I'd like to fix it.

Generally, scaling is based on the shorter edge of the frame. The edge length of the largest square is about 30% of that. All other squares follow in a similar fashion.
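In code, the rule is roughly this (a hedged sketch with hypothetical names; CGSize just stands in for whatever frame type is actually used):

    import CoreGraphics

    // Hedged sketch of the sizing rule described above.
    let frame = CGSize(width: 1200, height: 800)
    let shorterEdge = min(frame.width, frame.height)   // 800
    let largestSquareEdge = shorterEdge * 0.30         // roughly 30% of that: 240
    // All other squares are derived from the frame in a similar fashion.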


The article asks a fair question. The comments here raise fair points. In the end, though, all of this activity is nothing other than doing philosophy.

Because what else would it be?

The article is a critique of some aspects of contemporary philosophy, like many others before it (the Greeks). It can't be a dismissal of doing philosophy itself, because asking about the worth of philosophy is a philosophical question.


> Because what else would it be?

Usually if you make a point like this, you state the definition of philosophy and show that the very thing falls under the definition.

It could be reasoning, application of common sense, rhetorical play, use of large language models ftw, finding of a common opinion. And that is just a minute of brainstorming.


True, and this has real effects on jobs.

There was a similar dynamic with the translation market and Google Translate. It didn't matter that human translation was superior. What mattered was that Google Translate (or similar) brought down prices so much that it effectively destroyed the market for everyday types of translation. Why? Because customers said, well, just use Google Translate and then improve it a bit.


I'm thinking of a project where one person knows how to admin something, with quite a few Postman setups on their own computer.

At least provide CLIs.


Great article, very true.

"For some reason, corporate legibility tools often have poor UX for those who interact with them who are not administrators."

That's because administrators decide which of these tools to use and buy, so the tools focus on pleasing them first. This reminds me of a similar situation with doctors. Some years back there was a great article in the New Yorker called "Why Doctors Hate Their Computers". There is a similar dynamic where doctors waste hours on documentation that is then read by hospital administrators, who base their decisions on it. To the administrators this is great: they can see patterns and react to them. To the doctors, not so much.


In my experience (not the views of my employer) the most important stakeholder for an EHR is the ONC (federal government). Design-by-committee requirements with the happy coincidence of making it almost impossible to innovate or disrupt the industry. They change all the time, requiring a ton of dev effort just to stay in business, which favors giant companies who can afford that "rent". Next is health system administrators (whoever decides whether or not to buy, directly affecting revenue growth). Next are insurance companies. For the amount of revenue they pull in, a shocking number of practices/hospitals/etc. are so financially insecure that if the payers drag their feet for even a short period it can put them out of business. Actual healthcare providers and patients are too low on the list, and it sucks.

