Swift 6 (swift.org)
397 points by todsacerdoti 73 days ago | 354 comments



Swift would be perfect if it wasn't dying a death by 1000 cuts thanks to the inherent conflict in its governance.

Swift is caught between two clans: the Swift Working Group™ open-source community, and the Apple corporate entity who pays most of their salaries. Both have their own incentives and their own imperfections, but you can guess who has the majority influence.

Ridiculous tech debt, such as hardcoded compiler exceptions, lives permanently in the compiler codebase. Even worse, half-baked concepts such as result builders are pushed through without any real discussion because Apple wants the SwiftUI syntax to look pretty.

It's an amazing language still, but I can't see it surviving as nicely in the next 10 years if Apple doesn't learn to let go.


If I were new to Swift and saw how complicated and version-specific the Stackoverflow answers* are for things that are super simple in other languages, I wouldn't expect things to get much easier from there. And that instinct would be right.

* https://stackoverflow.com/questions/34540185/how-to-convert-... https://stackoverflow.com/questions/32305891/index-of-a-subs... https://stackoverflow.com/questions/39677330/how-does-string... https://stackoverflow.com/questions/24250938/swift-pass-arra...


Three of those really are very specific to string manipulation, and doing it "right" (with all the possible representations of what a string can be) is inherently complex. I think Swift landed on the wrong side of defaults for this API, opting for "completely safe and correct" over "defaults to doing what I expect 99% of the time"

You can get a `utf8` or `utf16` "view" of a string and index it much like an array (e.g. `myString.utf8.first` gets the first UTF-8 code unit). But that's not going to work with complicated emoji, or with languages whose characters need multiple UTF-16 code units, etc. Again, I think the vast majority of people don't care about complete correctness across all possible string representations, and possibly Swift has gone too far here, as evidenced by all the Stack Overflow posts and the clunky API.
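For anyone who hasn't hit this yet, a minimal sketch of the API being described (the exact counts depend on the string; the string and values here are just for illustration):

  let s = "café 👨‍👩‍👦‍👦"

  print(s.count)        // Characters (grapheme clusters): the family emoji is 1
  print(s.utf8.count)   // UTF-8 code units: the same emoji is a couple dozen bytes

  // Integer subscripting doesn't compile; String indices are opaque:
  // s[0]                                     // error: String is not indexed by Int
  let i = s.index(s.startIndex, offsetBy: 2)
  print(s[i])                                 // "f"

  // When you really do want raw bytes:
  let bytes = Array(s.utf8)                   // [UInt8]
  print(bytes[0])                             // 99, i.e. ASCII "c"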

On the array-pass-by-reference, I'd argue that it's valuable to learn those semantics and they aren't particularly complicated. They offer huge benefits relating to bugs caused by unintentionally shared state. Marking the parameter `inout` is a small price to pay, and really forces you to be explicit about your intentions


Swift was designed around emojis, it seems. The first page in the manual shows how you can use emojis as variable names. I get why Apple wants to be clear that there are different ways to index into strings (even if this is motivated 99% by emojis like "family: man woman boy boy skintone B"), but still, the API didn't have to be this confusing or have so many breaking changes after GA.

About structs and pointers, I'm familiar with them in C/C++ where the syntax is clear and consistent. It's not consistent in Swift. And arrays don't even act like regular structs, I forgot: https://stackoverflow.com/questions/24450284/conflicting-def...


Or you know, the couple of languages people speak that are not using ASCII…


The issue here isn't ASCII vs unicode, it's specifically symbols that are composed of multiple unicode codepoints.


(Which btw isn't exclusive to emojis, there's also Hangul Jamo that the Unicode standard says is used for "archaic" Korean characters, but emojis are probably the main use case.)


FWIW, that answer (to the second link, after edit) is really old, and you can do string.firstRange(of: substring) since Swift 5.7.

The top answer to your third question gives a pretty good explanation of why Swift string indices work the way they do (as well as showing nicer ways to spell a lot of the operations on them), which mostly addresses the first and third questions. It really seems that your last link is just asking for the `inout` modifier; I'm not sure why that one is especially confusing.

Obviously, there's always stuff that can be further improved, but none of these are especially onerous (once you get past the "string indices are not just integers" step, at least; for people who really just want to pretend the world uses ASCII or to work with UTF-8 bytes directly, string.utf8 may be a nicer interface to use).


The thing is, each string-related answer ended up extending String with some methods that everyone wanted it to have in the first place, and the top-voted comments are like "why do we have to do this." It also shouldn't have required multiple updates to each answer.

The time I was doing a lot of string manipulation in a team Swift project, we had to write our own wrapper that basically stored strings as arrays of printable characters because the native one was too annoying. This also protected us from all the breaking changes Apple made to strings across Swift versions.

The inout one is different. It's confusing that arrays and dicts are structs, which have different rules from regular objects, and the syntax takes some getting used to:

  func addItem(_ localArr: inout [Int]) {
    localArr.append(4)
  }
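And at the call site the caller has to opt in with `&`, which is the part that reads strangely at first; a minimal sketch using the function above:

  var numbers = [1, 2, 3]
  addItem(&numbers)        // `&` required because the parameter is inout
  print(numbers)           // [1, 2, 3, 4]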


> It's confusing that arrays and dicts are structs, which have different rules from regular objects

As a long-time assembly and C programmer and now Swift programmer, I would say that structs _are_ regular objects, and things with reference semantics are weird. It all depends on your point of view!


I'm fine with either approach as long as it's consistent. Practical Swift code will have a few "inout"s sprinkled around, wherever someone happened to pass in a struct (like array) instead of an object (like NSArray). And that's without bridging and stuff like UnsafePointer involved.

Edit: Just remembered that arrays still don't act like regular structs either! https://stackoverflow.com/questions/24450284/conflicting-def...


The answer you link to there is from the Swift 1 days in 2014. It was absolutely true then, but array has had true value semantics since shortly after that answer was written.


Ah, it's coming back to me now. I remember when they fixed that and I was relieved.


I'm fine with either approach if it's easy to know which one I'm using at any particular call site. C# started the tradition of moving this information to the type level, far away from the call site. Swift has adopted this terrible idea.

The philosophy of making code look "clean" at the cost of hiding critical information in some far away place is the biggest mistake in programming language design. If code happens to look clean it should be the result of being simple rather than being cleansed for effect.

Other bad ideas in the same vein are computed properties and function builders.


Computed properties are annoying too. More importantly, they're one more inconsequential thing for SWEs to argue over in code review.


What do you mean by moving information to the type level?


In C and C++ (and many other languages) the distinction between value and reference/pointer is part of the variable declaration. In C# and Swift that information belongs to type definitions (except for inout parameters). Structs are value types and classes are reference types.

Type definitions are usually further away from the call site than variable declarations. Also, in C, accessing a member of a struct through a pointer requires the -> operator providing yet another clue as to whether you're using a pointer or a value.

In my opinion, this distinction is absolutely key. It should always be obvious at the call site. I don't think there is anything that has caused more bugs in my Swift code than getting this wrong.

Change a type from class to struct or vice versa and chances are your code still compiles when it really shouldn't because it's now completely broken.
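A tiny illustration of the failure mode (type names made up): nothing at the use site changes when a type flips between struct and class, but the behavior does.

  struct PointS { var x = 0 }   // value type: assignment copies
  class  PointC { var x = 0 }   // reference type: assignment shares

  var a = PointS(); let b = a; a.x = 5   // b.x is still 0
  let c = PointC(); let d = c; c.x = 5   // d.x is now 5

  // The two lines above look identical at the use site; only the type
  // declarations (possibly in another file) tell you which behavior you get.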


If there are identity-having classes (reference/pointer) that may be mutable, and value types that are always immutable, then I think it can be an "implementation detail", part of the type, without changing semantics.

If you can't change a struct's field, only the whole struct, then I believe it's fine, and the compiler may decide to copy it or change it in place depending on the available context. Is that not the case?


In a language without mutability (or perhaps with something like borrow checking?), it would not be a problem. But in Swift, the option of introducing mutability always exists.

It's not impossible to impose some sort of discipline to mitigate these issues, but why is it a good idea to make values and references indistinguishable at the call site? I don't get it.

But it's not surprising. I don't get a lot of decisions that language designers have been making. The entire drive towards more and more syntactical abstraction in order to make everything look like a DSL is a mistake in my opinion.


Refs in C++ go against this, right? At the call site, it's not clear that you're passing in a ref instead of a value. I don't mind it too much cause at least the rules are the same for everything, not different for structs vs classes.


Yes. At least references are still part of the variable type and therefore likely to be closer to the call site.


Can you explain how result builders are half-baked?

They’ve been used for more than SwiftUI by now. The Regex support since 5.7 uses them: https://www.hackingwithswift.com/swift/5.7/regexes


Because they rely on variadic generics, they can make type inference very slow or just fail completely, leading to the infamous and singularly unhelpful compiler error, "The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions".
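For readers who haven't seen one, this is roughly all a result builder is; SwiftUI's ViewBuilder is the same idea with views and many more build methods. A minimal sketch (not Apple's implementation, names made up):

  @resultBuilder
  struct LinesBuilder {
    // Each bare expression in the closure becomes an argument here.
    static func buildBlock(_ parts: String...) -> [String] { Array(parts) }
  }

  func lines(@LinesBuilder _ content: () -> [String]) -> String {
    content().joined(separator: "\n")
  }

  let greeting = lines {
    "Hello"
    "World"      // no commas, no array literal: the builder collects these
  }
  print(greeting)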


Swift is the only language where I've had to fight the compiler to do its job. In earlier versions like 1.x and 2.x, it would often segfault. By 3.x it was still really slow to build. I regretted moving a project off ObjC back then.

I thought maybe that was all fixed by now, but guess not?


On paper Swift has a lot going for it. In practice it's easily the worst devx out of the modern languages. And SwiftUI is still so full of bugs and performance pitfalls I'm actually quite pessimistic about the future of native apps on Apple platforms.


To be honest, the way they are damaging their brand/products/OS just to make a bit more money is enough to be pessimistic about Apple.

But it's very true that the state of the language can be felt in their native apps, which have tended to suck pretty badly recently. I still can't get over the nightmare that is the split-up of iTunes; at least we knew the old app was clunky because of its age, while the new stuff is just bad.


Yeah there's a reason people go to all that effort with React Native to avoid writing Swift code or dealing with Apple's UI frameworks, and it's actually a reasonable approach for the majority of apps.


My main app is a cross platform Flutter app. I've considered rewriting it in Swift because most of my users are on macOS or iOS but all the prototypes I've written are actually slower even after extensive performance work and the development experience makes me want to tear my hair out.


Ironic, given Flutter's infamy regarding performance (jank).


I'm actually surprised at this because while UIKit is hard to use, at least it's fast. Though I remember the concurrency model being confusing, so you could accidentally block your UI thread.


UIKit is pretty fast although a major step down in dev velocity.

AppKit on the other hand seems to be pretty intrinsically slow and the controls are looking increasingly dated.


Odd criticism.

UIKit is the iOS counterpart to macOS's AppKit, and both are implemented as convenience wrappers around CALayers. They are also infinitely customizable. You can overload UI/NSView and draw vector-pen style on a blank canvas or render whatever you want on a GPU frame buffer. This is how MapKit, Safari, and the Camera view are implemented.


Not sure what you mean by "implemented as convenience wrappers around CALayers," especially when it comes to NSView where you have to opt-in to layer-backing.


It’s a criticism from recent experience trying to build AppKit based UI. The examples you list barely use the stock widgets.

There’s decades of accumulated cruft in Cocoa that Apple discarded when implementing iOS.


Yeah I worried about that going in too but in fact I've found it much easier to get good performance with Flutter than SwiftUI, especially for large collection views and especially on the mac.

The work the Flutter team did on Impeller seems to have paid off.


You should try to implement the iOS photos.app in flutter and see how that goes. This requires scrolling through gigabytes of photos as fast as your finger can swipe without a single hint of slowdown or a loading spinner. And it’s been that fast since.. iOS 7?

Yeah it’s not the language or the SDK that’s slow. Rather it’s inexperienced, lazy, or overworked developers that can’t/won’t write performant software.


I’ve been building iOS apps since before Swift existed. Sure like I said if you code directly to UIKit and take a little care performance is good. It’s also very fast in Flutter with even less care. Rendering images in a grid isn’t hard unless your core abstractions are all wrong.

Now try that in SwiftUI. You’ll be forced back to UICollectionView.


That’s cool. I’ve been developing on Mac before Objective-C 2.0 and iOS since the AppStore was released. Millions of downloads, featured in the store, worked on dozens of projects from video games to MFi firmware, and have been invited to Cupertino to meet with teams.

I’m not defending SwiftUI. I mostly use it as a wrapper around NS/UIKit because it’s still buggy and not as flexible.

By the way, SwiftUI is also implemented on top of CALayers just like NS/UIKit. It can be fast in theory, but you have to know exactly where the pain points are and most developers don’t know how to do that.


I'm not sure why you keep bringing up CALayers. That is not where the performance bottleneck lies in SwiftUI.


I don't think it's impossible with proper caches of smaller-dimension versions (which supposedly Apple already generates/has access to, since they're doing a bunch of processing anyway, like object recognition, etc.).


It's only that fast if the thumbnails are not too slow, the data is all there on device, the phone isn't RAM-starved, and you largely have the latest iteration of iPhone for the current software, with the most powerful chip available.

In this way, yeah, it's pretty fast. Any other way it's blank square galore.


People go to React Native to stay with their cozy Web skills; it would be exactly the same if we were talking about Microsoft and Google platforms.


I started using RN when I had 0 web skills and didn't know JS. Everything from making a simple button to hooking up the model was easier to me in RN from day 1 than the native iOS way that I'd been using for years.


As someone that unfortunately has to deal with the React ecosystem, and knows native programming across the Apple, Google, and Microsoft ecosystems reasonably well, I have a hard time believing that, but it might be a knowledge issue.


The Google ecosystem? Isn’t that also just Web-based?


Not Android


I mean, I figure the more compelling reason to do that is so you can also ship an Android app without writing everything twice.


Sorta, but it's not as easy as they make it sound. And people will use RN even for iPhone-first stuff.


except for the thousand times you end up having to dip down into native components


I'd say most of the time it's a handful of times or less. Uniswap is a good example of a large OSS three-platform app that shares almost all the code, uses very few native dependencies, and has great UX. I may be biased since I worked there and made the UI framework they use, though.

https://github.com/uniswap/interface


I have made a lucrative career by porting fragile, slow, bug-ridden react-naive disasters to native code bases. There is a lot of demand for this from startups that took the cross-platform shortcut and the MVP became the product.


You can make a disaster in any framework. SwiftUI is a mess, for example, and slow.

React Native took a while to mature, but with the right tooling you can ship amazing UX now.

I don’t doubt there’s a ton of crap out there.

But you’re wrong if you think you can’t make seriously great stuff with it. It’s matured quite a lot.

And the React programming model is untouched, hot reloading and dev tools far ahead, and code share is worth it with something like Tamagui that actually optimizes to each platform. If I never had to touch an ObservableObject again that would be great.


It can get tough with the native dependencies involved.


I have made countless PRs to many of the most popular react-native dependencies because they were a buggy mess.

In fact at this very moment I’m helping a team fix a memory leak/crash in the “react-native-permissions” dependency. It’s obvious this package was not written by someone with experience. All it does is request permissions in a paragraph of code and it’s totally broken! Give me a break


I have plenty of nightmare stories to tell you about native deps.


For some apps, I can see this. Question is, did the startups regret taking the shortcut upfront, or were they fine paying later for the improved version?

Btw, sometimes I think about how much I've been paid by various people to move a backend from SQL to NoSQL then from NoSQL to SQL, despite me telling them not to.


Btw I don't think of it as a shortcut, I think it's actually the ideal end-state.


When I was still doing contract work I rescued a bunch of native code iOS app disasters. For most apps cross platform solutions are fine.


Like you don't have to know native components anyway?

In one way you centralise as much logic as you can and are encouraged to write clean code that doesn't depend on platform quirks. In the other way you... give up and just do whatever.

I can see how some devs find it hard to not give up and just write the same logic in multiple languages, great job security!


I can think of a few others where you have to do that; most of them are the kind of languages whose fans say they're impossible to write bugs in.


Rust is hard but I've never had the compiler just throw up its hands and tell me it's up to me to figure out what's wrong.


That's not the one I was thinking of.

https://anthony.noided.media/blog/haskell/programming/2020/0...

Something like Idris or Coq would have even more complex messages, though I don't have an example on hand.


Ok but these are mainly academic research languages. Swift has the backing of the most valuable company in the world and is what they're pushing as the right way to develop for their platform.


Haskell is definitely a real industrial language!

Many of the other languages in the formally verified/dependent type space are academic, but there's government interest in things like Ada too because they don't want their planes to crash. Couldn't say how good its error messages are though.


I've seriously used Erlang for a while, and Haskell looks kinda similar. Ingenious ideas there, cool features, but in the end it's cumbersome to use. So I can see why these are niche and wouldn't consider them next to big ones like Swift or C++.


If Rust is one, yeah I have to fight that compiler but it's because it's doing its job and not letting me do invalid things. Not because the compiler has some feature "not yet implemented" or has bugs.


Also, is anyone familiar with the weirdness with tuples in Swift? All I remember is they never worked the way I expected, and for some reason something in our code specifically worked for tuples up to size 5.


Swift only got variadic generics fairly recently, and before that you couldn’t write code which was generic over tuple size. Instead you had to codegen versions for each size of tuple you needed to support.
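For reference, a rough sketch of the parameter-pack version of that (packs landed in Swift 5.9, and iterating them in Swift 6); before that, libraries really did codegen an overload per tuple size:

  // One function generic over any number of arguments / any tuple arity.
  func describe<each T>(_ value: repeat each T) -> [String] {
    var parts: [String] = []
    for v in repeat each value {     // pack iteration (Swift 6)
      parts.append(String(describing: v))
    }
    return parts
  }

  print(describe(1, "two", 3.0))     // ["1", "two", "3.0"]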


I think that was it. There was also something about tuples inside of dictionaries that Swift 1 or 2's compiler segfaulted on.


> Ridiculous, permanent, tech debt such as hardcoded compiler exceptions are permanently living in the compiler codebase.

A little search-engining didn’t surface info about this, could you point me in the right direction?



Thank you - yeah, that smells pretty bad. But, although it says "can't be changed", could it not be changed if SwiftUI were changed? An up-to-date compiler would no longer be able to compile projects using older versions of SwiftUI, but is there not a path to get there by updating SwiftUI to not require this exception, then some time later, deprecating the older versions of SwiftUI, either throwing a warning on compilation or putting it behind a compiler flag, and then finally removing it altogether?

It would certainly take a while, but could be done if Swift is aiming to be a mainstream, long-term, cross-platform language - unless this kind of thing is just not done, or there is some technical aspect I am missing?


I get what you’re saying and largely agree, but without result builders SwiftUI wouldn’t exist, let alone “look pretty”. You seem to be devaluing the syntactic sugar it brings, which in this case makes a massive difference.


That is a fascinating take and approach to contemplating a language.

Can you think of any other languages that share a duality like Swift? I mainly play in the python ecosystem, but I am looking to branch out and really learn a compiled language. What you wrote about Swift makes sense and would be concerning if I had recently picked it up.

"Yadda, yadda..." regarding picking the right tool for the job aside, I don't want to waste time on a language that can be usurped by a giant corporate borg anytime they see fit.


> I am looking to branch out and really learn a compiled language

Tip from a friend - just learn C++. It's not sexy, it's not new-fangled, but it's rock solid, runs half the world, and you will never struggle to find a job.

You'll also understand what it is that all of the new-langs are trying to improve upon, and will be able to made an educated judgment about whether they succeed or not.

A good resource for modern C++ (albeit it seems to be down rn?) is https://www.learncpp.com/. I'm not affiliated, it's just good.


C++ has a lot of things I would call new-fangled, in addition to many old ways of doing things, with no good ways to settle on which iteration to use so devs can avoid learning all of them. And some things simply require templating nightmares to work at all.

I would also not use "rock solid" in comparison to how easy it is to hit undefined behavior.

Used all over and easy to find jobs, yes.


C++ gives you a garage full of tools and lets you decide what to do with them. Ultimately, it does this because the years have shown that different people need different tools for different use cases. Few people need or use them all at once. But when you do need a specific tool, C++ has it ready for you. Some consider that a con, I consider that a pro.

I find that a lot of the newlangs take the view that since most programming only uses 20% of the toolkit, they can just dispense with the remaining 80%. Which is great, until you discover that virtually every sophisticated project needed one of those things from that remaining 80%, and they all needed a different one. A nice language for writing 'Hello world's isn't going to cut it. And so either the language insists on its simplicity and stagnates, or it just reinvents all the complexity of C++.

At which point, you were better off just taking the garage full of tools that you already had, rather than going all in on some newlang, getting blocked, and stalking a github ticket for years waiting for the lang to get that feature you need. (If you spent the time in C++ instead, its full garage wouldn't look so imposing over time!)

What's the famous quote? 'There are only two kinds of languages: the ones people complain about and the ones nobody uses.' :P

Re generics, aren't C++'s virtually the same as Rust's? Especially now that C++ has 'concepts'?


There's a lot of redundancy and things you probably should never use in C++, though. It's not complexity that needs to exist other than for backwards compatibility.

> Re generics, aren't C++'s virtually the same as Rust's? Especially now that C++ has 'concepts'?

I'm not worried about generics when I talk about template nightmares, that's more about rvalue and const overloads and vararg templates and careful manipulation of SFINAE, all coming together.


You don't have any template nightmares after C++ 20 concepts. The error messages improved so much that I thought I was using a different programming language.


> There's a lot of redundancy and things you probably should never use in C++, though

This is just a symptom of its age. C# has this same problem, too, and it's only getting worse.


C# does have C++-esque issues but the scale at which they affect the language is not comparable. There are really no popular patterns that have become "no, never do that", poorly written code is sort of version agnostic, and even old code, particularly one like in Mono applications from back in the day, plays really nicely with modern runtimes. And the difference in ways to achieve something is almost always of type "remember this required to take 4 steps? now it's one shorthand thing that makes sense in retrospect".


Something I’ve been curious about recently, is how did Linux get away with straight C for so long, considering how complex of a project it is. Did they end up reimplementing a bunch of C++ features?

Actually, regarding sophisticated projects, there’s quite some complicated projects that succeed without C++ power, like Postgres and Python.


I didn't mean to imply, if I have, that C++ is always and in all circumstances the best choice for any given software project.

The question was about the first compiled language someone should learn, and for that, C++ is great. It's going to cover most of the use cases for compiled languages, while providing relatively familiar abstractions and semantics.

C is fantastic when you need to eke out every single cycle of performance, which is why it's a great choice for Python and Postgres. But you do this by effectively writing assembly instructions, which is why it's a terrible choice for someone coming to compiled languages for the first time.

C++ gives you equivalent performance to C, for a fraction of the effort, in about 90% of the use cases. For the remaining use cases, C is a better (and often only) choice, but no one who is only learning compiled languages for the first time is anywhere near being able to write that kind of C code. (After a few years of C++ they'll be in a much better place to do so!)


I’ve been meaning to do this for years, and just played around with rust a bit (liked it, but the wrappers around some c++ stuff i wanted to use were half baked). Learning rust, there was this “rustlings” thing [0] that was a set of guided exercises to go alongside the rust book. Fastest I’ve ever picked up a language, thanks to that. Do you or anyone know anything similar for c++?

[0] https://rustlings.cool/



I use C++ at work but am glad I didn't learn it before. Yes it's a good language for what it's made for, but there are so many features that anywhere you work, you will use it differently from how you used it before. Better to learn it on the job imo.

Just getting good and greasy with Python and JS with all the typical libs has been more rewarding for me. Nobody taught me, but it was useful.


> Tip from a friend - just learn C++.

Is that good advice for certain domains only? For example, you likely wouldn't want to use C++ for web server backend? You could, but may not want to.


Definitely. Use the right tool for the job.

But if you're looking to learn a compiled language - presumably because you want to write applications, games, or systems - then C++ is a great one to learn.


C# for Microsoft, and for a long time people were afraid of Amazon's influence on Rust.

But the reality is that many major languages already have very outsized corporate influence. Either at the language level or the compiler level.

Swift is open source and has been separating from Apple ownership as of this year.


I can certainly think of a web browser that has that same conflict of interest... in fact we are right in the midst of a "leopards ate our face" moment with ad blocking becoming undesirable to Google in Chrome.


Java has problems now that Oracle is in charge. Upgrading Spring/Hibernate versions is really painful when you have to switch to Jakarta imports all over.


Without Oracle in charge Java would have died in version 6.


You can apply the same complaint to Java and .NET ecosystems, both doing quite well, despite not everything being open source, or being managed by entities that FOSS folks rather not deal with.

"Oracle, IBM, Azul, AWS, Google..... this and that."

"Microsoft this and that."


I was always puzzled by the SwiftUI syntax. Thanks for pointing me to result builders; I understand better now. I can't help thinking the whole thing could be re-implemented with macros now? (Result builders appeared in 5.4, macros in 5.9.)


Was Swift really necessary? Would it really not have been possible to build iOS apps on something like JavaScript?


Apple never set out to create Swift. Chris partner worked on it for fun in his spare time, and when he realized it could go somewhere he went through the channels to get it turned into an official project. With that being said, it was tailor-made to address important issues for Apple developers. For example, it has great interop with Objective-C, which was a requirement to get it adopted, first at Apple and then with the wider community. It is also built with safety and security in mind, removing lots of undefined behavior and adding null safety. It is also statically typed while JavaScript isn't. There are a whole host of other goodies that make it better for Apple developers than just adopting JavaScript. Chris partner has lots of talks where he goes over the reasoning.


>Chris partner

Chris Lattner


It wouldn't be performant enough. Resist the urge to go as abstract as possible on a battery-powered device someone else is paying for.


A low-level language that's ergonomically interoperable with Objective-C was probably the best choice.


I'm still fairly new at Swift and like it "OK" so far. One thing that I find particularly annoying however is how you very often run into "Compiling failed: the compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions" when using SwiftUI...

It's just shocking to me that the compiler is asking me to help it out.
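For what it's worth, the usual workaround is exactly what the error says: name intermediate values and give them explicit types so inference has less to juggle. A contrived sketch with made-up data (the same idea applies inside a SwiftUI body, where extracting subviews helps too):

  struct Item { let price: Double; let quantity: Int }
  let items = [Item(price: 9.99, quantity: 2), Item(price: 4.50, quantity: 1)]
  let shipping = 3.0, discount = 1.0

  // One big inferred expression like this is what tends to time out:
  // let total = items.map { $0.price * Double($0.quantity) }.reduce(0, +) + shipping - discount

  // Split up, each step has an explicit type and checks quickly:
  let lineTotals: [Double] = items.map { $0.price * Double($0.quantity) }
  let subtotal: Double = lineTotals.reduce(0, +)
  let total: Double = subtotal + shipping - discount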


There is an article about this based on a Chris Lattner talk/links to previous HN discussions: https://danielchasehooper.com/posts/why-swift-is-slow/


Wow. I think sometimes you just don't know how things are going to turn out until you use something "in anger", as they say here in the UK. "Let's try using this fancy type-checker in a production language" is reasonable. "Let's overload + so you can combine things easily" is reasonable. "Let's allow you to express different kinds of types as a string" sounds pretty cool too. How could you know the combination of all of those would change a simple expression with an invalid type in it into a 42-second compile time with a cryptic error message, except by actually doing it?

EDIT: Meanwhile, over here in go-land, I just got annoyed that adding a new external authorization library took the average compile time of my 20k project from 7s to 9s.


This is absolutely wild.


I think SwiftUI is actually causing Swift a lot of reputation damage. I rarely see the issues common to moderately complex SwiftUI views when writing Swift with code-only AppKit/UIKit, for a CLI program, etc.


I jumped directly into SwiftUI for a few macOS apps I published, and the weird performance issues / UI lag / Compose complexity for even simplistic apps were hard to navigate, even after a year.

Yet rewriting components in AppKit/UIKit feels regressive since even Apple's docs migrated to SwiftUI, but sometimes that was the right answer to regain some control over what was going on.

Though some of that was limited to macOS which seems to get a lot less attention from Apple than iOS, or because you're naturally building more complex layouts on macOS. But it always makes me appreciate how much better the kit is for building web frontends.


Most of the layouts I've built on macOS are actually simpler than their iOS counterparts because the mode of interaction is so different. Getting data onto the screen in a somewhat natural way is usually less of a contorted process because one doesn't need to deal with a software keyboard, a narrow viewport, lots of scrolling, etc.

The problem with SwiftUI on macOS in my opinion is that it's just not well-suited to the types of layouts common on desktop and especially macOS. It's best at uniform grids and lists which is more of a mobile thing. On macOS you want things like visually balanced whitespace, optical centering of labels+controls in forms, etc which are often rather arbitrary and exactly the thing that SwiftUI is worst at.


All this magic being baked into the language (as opposed to the libraries/ecosystem) is why I abandoned being a native app developer professionally


Eh, I'm not too bothered by it. SwiftUI has its issues but I've put the result builder feature powering it to use for things vastly more simple than SwiftUI, for which it works very well. It's a net positive overall and I prefer it to having to rope in libraries for every little thing.


I heard about SwiftUI just a few months ago and was initially excited by the demonstration at WWDC, so I started to pay more attention to it. Now I've been reading comments about how it doesn't handle more complex UIs well, but I haven't really read anything in depth. Can you recommend any resources on that?


I use it full time, I’d say if you’re targeting iOS17+ you’re probably good. On macOS it’s completely busted still and I wouldn’t bother with it. The future of mac apps looks even bleaker than it did before.


SwiftUI errors in general are mostly unhelpful.

It will go so far as to suggest a change in a thing only for the developer to find out the cause was completely unrelated or even in a model in a different file.

Helpful error messages for SwiftUI would go a long way


Admitting SwiftUI to have been a mistake and reinvesting back into AppKit and UIKit would go a long way.

In what other industry does anyone use text to create visual layouts? We still do drag and drop to design the layouts anyway, followed by re-creating the drag and drop version... using text...

It's all quite maddening if you try and make sense of it.


I think a text-based, declarative rendering API is the right choice. This approach has been vindicated on the web.

But the actual implementation of SwiftUI is terrible. Long compilation times, cryptic compiler error messages, bad performance, and a baked-in two-way binding observable state API that's great for code snippets and WWDC talks but makes building a coherent state model in a real app very difficult add up to a big mess.


iOS and macOS had a visual editor forever. Interface Builder was a really interesting take on visual layout, instantiating real objects in the interface and serializing them into nib / xib / storyboard files ready for use in your app.

Most developers I know in the ecosystem tried and tried to like the visual editor, but ended up falling back to code. Why? Source code is pretty great. It’s easy to read, write, share, diff, and organize.

SwiftUI has, again, a very modern and interesting take on visual editing. The code is the truth, but there's a rich, realtime preview canvas that makes round-tripping possible. For reasons, it's unfortunately all but unusable for a lot of folks with complex Xcode projects, but it's a very good idea.

In summary: Visual layout editors and text are both pretty great. The devil’s in the details.


I find that Interface Builder, while not great for editing XIBs on iOS and storyboards on either platform (it's slow and buggy in those cases), is actually still fairly pleasant when working on Mac XIBs. My go-to is pure code UIKit on iOS, but I still reach for XIBs frequently for Mac stuff.

If IB's performance and stability issues could be fixed, I think its biggest problem is the near-unintelligible spaghetti XML that it generates for XIBs and storyboards, which is a huge pain for VCS and frequently a cause of merge conflicts (which must be manually corrected). If they just cleaned up the XML output or switched to something friendlier to manual editing, that'd help a lot.


Android Studio (before Jetpack Compose) split the difference and offered easily editable/readable XML with textual autocomplete AND a visual editor. I much preferred that after having spent years working in Xcode's Interface Builder.

xibs and storyboards were also XML but IB would insert attributes that had no visual impact, move lines of XML around simply upon opening the file, or randomly change x/y values of the viewport upon opening the file, or update float values to use 15 decimal places instead of 2, and so on. Apple's insistence on making the visual editor have priority over text led to a worse developer experience as teams grew.


You just gave me flashbacks to 1.0 in 2019


In my case, Metrowerks CodeWarrior student edition in the late 90s, when I was going through "C for Dummies" in high school.

Miss a semicolon and you'd get an error on every line after, but not including, the one actually missing it.

ResEdit was better in the 90s than SwiftUI was in Xcode last year. Hoping to find that standard will be re-achieved when I install the update…


> the compiler is asking me to help it out

Yeah, this sucks, but it's also mind-boggling. The SwiftUI body result builder returns "some View" which means the compiler has to infer the specific type that the function produces, as a complex combination of whatever different if/then and while loops, along with their transitive function applications and closure contexts (where the type inference can be outside-in instead of inside-out).

Then layer in the @Observation macro magic for observing data types and re-building views when data changes, and SwiftData pulling from a data store with fully typed predicates, automatically replicated across devices....

It's like rocket thrusters from every angle, and pretty easy to start tumbling off the happy path.


Saw this on a video stream while the poor coder was struggling... you could see his BP rising.


Swift 6 is a major leap forward for cross-platform, batteries-included development IMHO. Foundation is the big win here, because it provides so much out of the box.

Swift 6 is the first version with enough low-level improvements and cross-platform capabilities to make me curious if the Swift team is trying to aim for long-term replacing C, C++, Rust, Zig, etc.


Long-term replacement of C, C++, and Objective-C on Apple's ecosystem has been on Swift documentation as mission statement since day one.

I don't get how people still get surprised by this.


It’s too slow. I can’t stand waiting for builds on what I would call a medium sized project.

I haven’t given 6.0 a full effort yet, but a couple test compiles without any tuning showed only incremental improvements.


Been using Swift since the pre-release announcement that it existed.

O'Leary's Law of Swift Comments on HN, much like Betteridge's Law of Headlines, says the answer to Swift speculation on HN is always no.

If there is any global aim, it is to migrate internal teams still working in ObjC/C++ to Swift.

If this sounds close-minded, it's worth talking with an iOS developer 1:1 and getting info on the day-to-day experience of SwiftUI 5 years in, and concurrency/actors 3 years in. They have a lot of fires to fight to keep the house standing.


> Apple is not trying to potentially attempt to aim for maybe replacing every other programming language with Swift

Apple's stated goal is to make Swift a viable language up and down the technology stack, which goes way beyond Objective-C. They are actively working on making Swift viable in baremetal environments, firmware, drivers, etc. IIRC they even referred to it as a C++ successor language at WWDC this year.

I agree that they're not trying to replace every other programming language, but they're investing in it becoming a viable language choice in most environments.


I think it makes sense for them to want it to be viable all over the stack; it also directly benefits them. I am assuming a lot of the motivation for embedded Swift is being able to use it in their own stacks. I think the WWDC presentation even gave examples of it being used in some pieces of hardware, like on the Apple TV?


John McCall (Swift team) gave a talk on Swift replacing C++ earlier this year: https://www.youtube.com/watch?v=lgivCGdmFrw

Selfishly I wish mixed C targets were a priority before C++ but I get why this was more important to them


> they even referred to it as a C++ successor language

Nice!

I'd gently warn against parsing that too closely: having been in the community and the weeds starting in 2014(!).

ex. the cross platform Foundation was announced and open sourced in 2016.

I'm sure a lot of things were upgraded or made more consistent and it's better now, etc., but it's painful to remember what was expected and communicated at the time, and how much later it is.


I think the goal of Swift on the server, or embedded Swift, is mainly to offer Mac and iOS developers a way to write everything in the same language.

Right now, my Mac app depends on a few simple web services that are written in a different language. It would be neat if those services could be written in Swift, so that I could just use the language I already know instead of relearning how to do decode JSON in Python/Ruby/PHP.

Swift on the server doesn't have to become widely used. As long as there is a simple way to write basic web services, it is useful.


> It would be neat if those services could be written in Swift, so that I could just use the language I already know

Then somebody else would have to learn Swift even though they write in a different language: Android developers (Kotlin/Java), React Native devs (JS/TS), Windows devs, Linux devs, etc. As long as Apple doesn't invest more in official cross-platform tooling, Swift is not gonna be mainstream (even though I like Swift). They have to bless other competing platforms.


I was more thinking of smaller teams or individual developers that are Apple only.

If you have a bigger team, then it doesn't matter as much, because you have different people who do the iOS app, people who do the website, people who do the back end etc.

Swift on the server doesn't have to go mainstream to be useful. If you just need some basic web service that syncs highscores or verifies in-app purchases then it would be neat if you could write that in the same language as the app itself.


> They are actively working on making Swift viable in baremetal environments, firmware, drivers, etc.

I don’t think Swift has a place in these niches FWIW. Writing low-level Swift code is very verbose and unnatural. Personally I just don’t think it’s practical to have a single language excel across the stack.


Apple says a lot of things at WWDC. Not all of them are entirely honest.


Do you have any evidence they’re not targeting it as their main everywhere language?

It’s already been used in their libraries, their OSes, and even firmware on embedded processors.


I think this conversation has a lot of parallels to their public views on SwiftUI. Are they working on it? Yes. Will it maybe end up being what they consolidate around sometime in the future? Maybe. Does that mean you should believe them when they say it’s the biggest thing you should invest in right now? No.


They’re pretty clear on both. UIKit, AppKit, and Objective-C(++) aren’t getting new features except in rare cases.

This is the way forward. And they’re dogfooding it. Even in some of their embedded processors.



Huh?

Objective-C was in the kernel.


In NEXTSTEP, yes, but not macOS.


And they regretted the decision not to do it in macOS.


Was it?

Objective-C requires a runtime. I thought the kernel was always C/C++.


NeXTstep 3.3 supported Objective-C drivers in the kernel. Mac OS X's IOKit changed them to use a (poorly defined/maintained subset of) C++.


Wow that’s an interesting decision. I didn’t know they went that far.


At the time they thought it would be too slow. In retrospect it would've been fine, I think. IOKit is a pretty strange subset of C++, but at least it doesn't use all those templates.


That was my immediate thought, speed issues. MacOS isn’t exactly an I/O speed king as it is. I suspect if it had made it in it may have been removed by now.


Nope, the other way around.

It was in NeXTSTEP. It was taken out because of performance concerns that turned out to be entirely unfounded. Alas, when they figured out that this was the case it was too late.

A similar example is CoreFoundation. It was a C implementation of the OPENSTEP Foundation. To this day, even really smart, capable and knowledgeable engineers believe that CoreFoundation is faster than the pure Objective-C Foundation. It is not. It is significantly slower. Often several times slower.

(It is of course very slightly faster than the "new" Objective-C Foundation that was implemented on top of CoreFoundation).

Another example is Swift. People believe it to be generally faster than Objective-C. Again, this is not the case. Swift code tends to be significantly slower than comparable Objective-C code.


"At the time" being when they turned NeXTSTEP into OS X / macOS (by way of Rhapsody / MacOS X Server).


It was based on Embedded C++. Definitely inspired by Objective-C but I believe they wanted to avoid dynamic dispatch in the kernel.


Yes it was. I actually wrote a kernel driver in Objective-C for an EISA board on OPENSTEP.

The runtime support that's required for Objective-C is extremely minimal. In the end you need objc_msgSend() and some runtime structures so that function works.

The methods themselves are just C functions with slightly funky names.


There is code in the kernel to enable treating types like `os_log_t` and Clang blocks as Objective-C objects higher up the stack, but the XNU kernel itself is almost entirely C and C++.


Yeah, on macOS.

On NeXTStep / OPENSTEP kernel drivers could be and were written in Objective-C.

https://www.nextcomputers.org/files/manuals/nd/OperatingSyst...


I feel like the whole issue with C++ is that in order to create a language able to do everything, you more or less have to include a feature set that is "unharmonious", to say the least.

To me it kind of feels like Swift's place here is to replace the subset of C++ that Apple is mostly interested in, but not necessarily the C++ that they aren't, and likely not the C++ that Rust et al. are able to replace (although I guess we'll see what they have in mind as time goes by).

I suspect they'll be disappointed if they try to replace C++ in totality, as the end result will likely not be particularly habitable.


> I suspect they'll be disappointed if they try to replace c++ in totality.

Why is that? That is their aim, I think they’ve made it clear.

They’re writing all-ish new code in Swift (not always practical in existing code) from what I understand. They’re already using it for the code that runs in places like the Secure Enclave or other embedded processors.

Can it replace C++ 100% everywhere today? Probably not. I don’t know enough to know why. But that absolutely appears to be the goal.


Any idea if compiling for Android (at least the data part) will now be supported?


I recently wrote about using a native Swift toolchain for Android: https://skip.tools/blog/native-swift-on-android-1/


Even if it does, it isn't like using the NDK and going through JNI for anything useful outside game development is that great an experience.

People keep doing that because, I guess, their hatred for Java and Kotlin is greater than the pain of using the NDK.


The most batteries-included multiplatform language right now is _by far_ Kotlin, and nothing else is remotely close. The ecosystem is amazing, the language itself is awesome and follows sane rules, and you can choose between native code, JVM, WASM, or JS-IR (or all of them) as backends to compile the exact same code to, depending on your use case. Compose Multiplatform is also wonderful and by far my favorite UI library now, and I've shipped a fair bit of code with it. It's a single UI library that lets me share my UI across every platform (including iOS and Android), and even seamlessly interop with native widgets on platforms like iOS if I need to, like the camera viewfinder for example.

Kotlin's real strength is the compiler's ability to target as many different backends as you want and cross-compile to any platform you want. I have an app in production that, save for about three lines of Swift declaring the view controller for iOS specifically, shares its entire codebase, all in Kotlin, between Android, iOS, Mac, Linux, and Windows. I could add a line to the build file to add web as a target and turn it into a web app.


The comment you are replying to was focusing more on the low-level features of Swift for systems programming (hence comparing it to systems languages). Kotlin is cool but it is not a system programming language and the native code compilation from Kotlin is not aimed to be.


That earlier comment didn’t actually mention “systems” at all (although it does include “low-level”).

I feel like “systems programming” is getting increasingly ill-defined these days, anyway. If Kotlin and Java aren’t allowed, how about Go? Are you ruling out all languages with garbage collection?


A basic litmus test I use for systems programming languages is: “can I imagine the Linux kernel including this language one day?”

Currently that’s C, Rust, and some assembly.. obviously there are more languages used for systems programming tasks too. Memory management, concurrency, and low-level access are also super important in a system language. How would you define a systems language?


C++ is sobbing in a corner I guess


C++ would sob in a corner, but the Sob static object was instantiated before the Tears static object because the link order changed and so there's a dangling reference and nothing runs any more.


Well deserved upvote.


I think systems programming, like system tools or CLI tools for example, can also be done with a garbage collector, e.g. in Go or OCaml. For low-level development you would usually expect the program to manage memory itself, not some garbage collector. This is not black and white; some approaches to reference counting, for example, might be suitable for some low-level development. Or in some languages you can choose, like Nim, D, or, if I understand correctly, now Swift?


> That earlier comment didn’t actually mention “systems” at all

It doesn't really need to mention "systems" specifically, since the comparison to other common systems languages, in addition to the words "low-level" implicitly imply systems programming languages.

Usually, languages with GC will not be considered contenders for systems programming.


You're right to note systems programming doesn't involve GC usually


> The most batteries included multiplatform language right now is _by far_ Kotlin and nothing else is remotely close.

I've been using C#/.Net lately and I've been very impressed. Very large ecosystem and community. Perhaps larger than Kotlin. And you are not stuck with one IDE from one company.

Microsoft also has .NET MAUI, which sounds similar to Compose Multiplatform. I have not used either, so I can't speak to any strengths or weaknesses in the UI libraries.


I’ve gotten the impression that MAUI might be a dead-end, with MS pushing Blazor instead.

I’m amazed people like C# so much. I think it really shows its age when you compare it to Swift or another modern language.

Some things I’ve been frustrated with:

- throwing isn’t a contract made at the function level. Anything can throw, and you can ignore it if you want. And the APIs are inconsistent because of it

- nullable types and default values are weird and seem like two solutions to the same problem

- Blazor bindings are very boilerplate heavy

- hot reload doesn’t work most of the time, and iteration times are bad compared to every other stack I’ve used


Kotlin isn't bad, but having written plenty of both it and Swift, I'd rather write Swift if I had the choice. Their syntaxes share a lot of similarities, but subjectively I find that Kotlin is less ergonomic, not as conducive to code that reads smoothly, and is finicky about odd details in comparison. I'll take SPM over Gradle any day too.

On the more minor side of things, Swift's built in JSON serialization is super handy. It's really nice to not have to increment the dependency counter to parse JSON and makes spinning up projects that much faster.
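For anyone who hasn't used it, the built-in Codable support is roughly this much ceremony (a minimal sketch with made-up data):

  import Foundation

  struct User: Codable {
    let id: Int
    let name: String
  }

  let json = #"{"id": 1, "name": "Ada"}"#.data(using: .utf8)!

  do {
    let user = try JSONDecoder().decode(User.self, from: json)
    print(user.name)                               // "Ada"
    let encoded = try JSONEncoder().encode(user)   // back to JSON Data
    print(String(data: encoded, encoding: .utf8)!)
  } catch {
    print("decode failed:", error)
  }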


Kotlin Native's choice to go with a GC over native memory management is my biggest issue with it and really limits its use for memory and performance sensitive use cases.


> The most batteries included multiplatform language right now is _by far_ Kotlin and nothing else is remotely close.

Kotlin is nowhere close to Java in this comparison. For all intents and purposes, there is just one realistic IDE for Kotlin, and realistically only one build system. And the community is very small. Java, by comparison, has many IDEs to choose from, many build systems, and a very large community to answer questions.



I've been using Flutter and Dart for multiplatform apps, as I found Compose Multiplatform to be far behind Flutter thus far in support. Dart also compiles to a VM, WASM, JS, and native code, although not the JVM but I don't know how useful that is if you already have the previous three.


With safe concurrency and typed throws, Swift is starting to look a lot like a friendlier Rust to me. Honestly pretty excited to take a look at it again, though I doubt it will become my daily driver due to the smaller package ecosystem. Hopefully cross-platform Foundation is a step towards improving that though.
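The typed-throws part in particular (SE-0413, shipped with Swift 6) is what makes error handling feel closer to Rust's Result; a minimal sketch with made-up names:

  enum ParseError: Error { case empty, notANumber }

  // The signature now states exactly which error type can be thrown.
  func parsePositiveInt(_ s: String) throws(ParseError) -> Int {
    guard !s.isEmpty else { throw ParseError.empty }
    guard let n = Int(s), n > 0 else { throw ParseError.notANumber }
    return n
  }

  do {
    _ = try parsePositiveInt("abc")
  } catch {
    // `error` can be inferred as ParseError here rather than `any Error`.
    print(error)
  }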


> Swift is starting to look a lot like a friendlier Rust to me.

That's what I thought, so I rewrote my CLI util in Swift. It ran great on macOS; then I tried to build for Windows and found out there's no well-maintained, actively developed HTTP server for Swift on Windows.

Don't let the wooing crowds fool you.


Ha! That seems to answer my next question: what’s its story for web development? Does it have one?


You can't compile for Linux from Xcode (the de facto IDE for all things Apple), and all web dev runs on Linux.

If you like having an IDE instead of scrolling multi-page compiler error dumps in your terminal window - this is a complete non-starter.

The leading Swift web framework (Vapor) suggests you use Docker to build for Linux. I gave it an honest try: their empty starter 'hello world' web server takes more than a minute to compile. OK, but surely it'll be faster after I make a one-liner change? No: their Docker workflow has zero compiler caching, so you'll be waiting more than a minute every time.

Complete non-starter.

I ended up installing a VM and the Swift compiler in it, and that only takes 2-3 seconds to recompile a one-liner change (in a print statement, in an empty project, lol). Consider me very deeply unimpressed.

By comparison - a visual studio code + docker + python/ruby/javascript setup is a well oiled, working machine.


You can install other toolkits for Xcode. There's even an AWS toolkit.


Is it possible to write code in Xcode, press compile, and have the debugger show me where an error is when compiling for Linux?

If yes, please show me the way because I've failed and I've given it an earnest try.


You can use LSP?


> story for web development

Under the hood, Swift-NIO and async Swift is a pretty powerful basis for writing performant servers. Aside from Vapor, there are other small/fast containers like hummingbird.

Not mentioned (surprisingly) is Swift support for wasm/wasi, for deploying code directly in the browser.

Also, "some say" that macros could revolutionize both static and dynamic generation by moving a good portion of site generation to compile time. I'm not aware of any libraries realizing that promise yet.

Finally, Swift concurrent actors have supported distribution for some time, so you can have all-Swift distributed systems, where the client code works with both local and remote servers.
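
For a concrete sense of the Vapor side, a minimal route is roughly this (a sketch from memory of the Vapor 4-era API - check the Vapor docs for the current entry point):

    import Vapor

    // Create the application, register a route, and run the HTTP server.
    let app = Application()
    defer { app.shutdown() }

    app.get("hello") { req in
        "Hello, world!"
    }

    try app.run()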


Vapor works great on Linux and macOS. Haven't tried Windows (pretty much only run Steam there these days)


For fast web servers, you could use .NET, especially if you care about Windows. It gives you a good ecosystem and a consistent experience across all platforms. Even FreeBSD support has improved considerably as of late. It is already built on top of what Swift, Java and others call "NIO"; in .NET it's just the "SocketAsyncEngine" that users will never know about unless they look for it :)


Did you look at Kotlin?


Swift does not use a virtual machine or garbage collection; it competes more with C++ and Rust, and if Apple is serious about pushing it cross-platform, that's definitely a welcome move. In fact, I can't wait, even though I have never programmed in Swift. The main point is that it's memory safe and seems much easier to code in than Rust.


> garbage collection

Reference counting is garbage collection, and it performs significantly worse from a throughput perspective than tracing GC, which is possibly the most common metric for web server type workloads.

It really is not nitpicking; we should just say "tracing GC" when we mean it.


There is Kotlin Native, which generates native code using the same LLVM that C++, Rust and Swift use. It doesn't have to use a virtual machine; that is just one of its targets.


Not sure if it is 'production ready' or how its performance/size compares to C++/Rust/Swift; in the end, though, it's the ecosystem that matters.


> garbage collection

is reference counting not considered a form of garbage collection?


Nope; no having to pause execution and clean up. Miguel de Icaza (the creator of Mono) explicitly mentions this as one of Swift's key strengths over GC languages like C# during a talk about Swift at GodotCon 2023: https://www.youtube.com/watch?v=tzt36EGKEZo&t=7s&pp=ygURc3dp...


Miguel naturally wants to sell the Swift for Godot story.

Also, some of the Mono issues were related to the fact that they never had Microsoft's budget for implementing a bleeding-edge runtime.

From a computer science point of view, RC is and will stay a GC algorithm.

https://gchandbook.org/contents.html

https://sites.cs.ucsb.edu/~ckrintz/racelab/gc/papers/levanon...

https://sites.cs.ucsb.edu/~ckrintz/racelab/gc/papers/AzaPet-...


Maybe he should then read a book on garbage collectors - they all start with ref counting.

Also, is it “pause execution and clean up” together? Ref counting obviously has to clean up - that’s the whole point - and it actually does so by blocking the mutator thread (the actual program written by the user). Then we didn’t even get to the point that syncing counters across threads is possibly the slowest primitive operation a CPU can do, so if we can’t know that an object will only ever be accessed from a single thread, ref counting has plenty of shortcomings. Oh, also, dropping the last reference to a big object graph will pause execution for a considerable amount of time (particularly noticeable when a C++ program that uses a bunch of shared_ptrs exits).


Perhaps? Most scenarios that explicitly compare .NET's GC vs Swift's ARC display much better performance for the former, to the point where the fact that ARC does not have GC pauses does not help if the whole thing is multiple times slower. In many ways it's like Go's """low-pause""" GC design discussions that completely ignore allocation throttling and write barrier cost.

Swift lacking a proper performant GC is a disadvantage. Upcoming features will likely address it by enabling more scenarios to sidestep ARC, but their impact on Swift as a whole, and on user applications that use them, is yet to be seen.

It's important to always remember - there's no free lunch.

I'm sad that Miguel de Icaza seems to have a bone to pick with C# nowadays, but it's not surprising given Xamarin story.


> Perhaps? Most scenarios that explicitly involve .NET's GC vs Swift's ARC display much better performance of the former

By which you mean "fewer CPU cycles on a desktop machine with plenty of memory"?

That's not when ARC is more performant; it's better on smaller devices that are under memory pressure and have swapped out some of your memory. In which case you have to swap it back in to go scan for pointers. And if you're a low-priority daemon then you evict higher priority pages in the process.


Perhaps? You assume GC takes unreasonably more space. It's purely a function of a tradeoff between running it more frequently, tuning heap sizing algorithms, and choosing to run collections as part of allocation calls on the same thread, sacrificing throughput in the process. GC can be more compact than what you assume. Modern good GC implementations are precise and don't have to mark dead GC roots as live, even within the scope of a single method. .NET and, I assume, Java GC implementations work this way - that's what "precise" means in "precise tracing GC".


It's not that it takes more space, it's that it has to read memory more often. Not all memory pages have the same cost to read.

Most memory swapping on most people's home computers is from web browsers for this reason; it's partly that everyone uses them, but it's also because they're running JavaScript. And they're pretty well tuned, too.


> it's that it has to read memory more often

Wait until you learn about "reads become writes with ARC" :)

ARC as implemented by Swift, on top of ObjC's retain and release, is a design that has the advantage of being simpler, but at the same time it is worse at other key aspects like throughput, contention, memory traffic and sometimes even memory efficiency. Originally, Swift was meant to use GC, but this failed because Apple could not integrate it well enough with existing Objective-C code, leading to a very crash-prone solution.

Also, JavaScript has nothing to do with the lower in abstraction languages discussed in this chain of comments.


You're lecturing me about my job here. I don't need to learn nothin'.

> reads become writes with ARC

That's not a big problem (it is a problem but a smaller one) since you can choose a different tradeoff wrt whether you keep the reference counting info on the same page or not. There's other allocator metadata with the same issue though.

A more interesting one comes up with GC too; if you're freeing all the time, everyone compresses their swap these days, which means zeroing the freed allocations is suddenly worth it because it compresses so much better.

> Originally, Swift was meant to use GC, but this failed because Apple could not integrate it well enough with existing Objective-C code, leading to a very crash-prone solution.

It was Objective-C that had the GC (a nice compacting one too) and it failed mostly for that reason, but has not come back because of the performance issues I mentioned.

> Also, JavaScript has nothing to do with the lower in abstraction languages discussed in this chain of comments.

Oh, people definitely want to use it in the same places and will if you don't stop them. See how everyone's writing apps in Electron now.


> A more interesting one comes up with GC too; if you're freeing all the time, everyone compresses their swap these days, which means zeroing the freed allocations is suddenly worth it because it compresses so much better.

Moving GCs solve it much more elegantly, in my opinion, and Java is just so far ahead of anyone else in this category (like, literally the whole academic field is just Java GCs) that not mentioning it is a sin.


> literally the whole academic field is just Java GCs

Not necessarily a good thing. While reading Java-related papers I found myself constantly thinking "damn, they wrote a paper for something that is just 2.5 smaller pull-requests in dotnet/runtime". I wouldn't put the modern state of academia as the shining example...


What are you even talking about? C# has a famously simplistic GC which is basically one big 1000-line file. C# has very different tradeoffs compared to Java: it pushes complexity to the user, making the runtime simple. Java does the reverse, keeping the language very simple, but the runtime is just eons ahead of everything else. Like, call me when any other platform has a moving GC that stops the world for less than a millisecond independent of heap size, like ZGC. Or just a regular GC that has a throughput similar to G1's.


Historically, at its inception, .NET's GC was written in LISP and then transpiled to C++ with a custom converter. It is still a single-file implementation, but I'm afraid it's not 1000 but 53612 lines instead as we speak :)

Well, that's not one file per se and there is more code and "supporting" VM infrastructure to make GC work in .NET as well as it does (it's a precise tracing generational moving GC), so the statement that it pushes complexity onto the user and underperforms could not be further from the truth. None of the JVM GC implementations maps to .NET 1:1, but there are many similarities with Shenandoah, Parallel, and some of the G1 aspects. In general, .NET is moving in the opposite direction to Java GCs - it already has great throughput, so the goal is to minimize the amount of memory it uses to achieve that, while balancing the time spent in GC (DATAS targets up to 3% CPU time currently). You also have to remember that the average .NET application has much lower allocation traffic.

In addition to that, without arguing the pros and cons of runtime simplicity (because I believe there is merit to Go's philosophy), .NET's CoreCLR implementation is anything but simple. So the statement does not correlate to reality at all - it makes different tradeoffs, sure, but together with the CIL spec and C# design it makes historically better decisions than the JVM and Java - decisions which lend themselves more naturally to achieving high performance: no interpreter stage, only intermediate compilations have to pay for OSR support, all method calls are non-virtual by default, true generics with struct monomorphization and so on and so forth. Another good example of the runtime doing truly heavy lifting on behalf of the user are byref pointers aka 'ref's - they can point to _any_ memory like stack, GC heap, unmanaged or even device mapped pages (all transparently wrapped into Span<T>!), and the runtime emits precise data for their tracking to update them if they happen to be pointers to object interiors without imposing any measurable performance loss - it takes quite a bit of compiler and GC infrastructure to make this work (exact register state for GC data for any safepoint for byrefs, brick tables for efficiently scanning referenced heap ranges, etc.).

List of references (not exhaustive):

High-level overview (it needs to be updated but is a good starting point): https://github.com/dotnet/runtime/blob/main/docs/design/core...

Implementation (the 53612 line file): https://github.com/dotnet/runtime/blob/main/src/coreclr/gc/g...

.NET GC internals lectures by Konrad Kokosa (they are excellent even if you don't use .NET): https://www.youtube.com/watch?v=8i1Nv7wGsjk

Articles on memory regions:

https://devblogs.microsoft.com/dotnet/put-a-dpad-on-that-gc/

https://maoni0.medium.com/write-barrier-optimizations-in-reg...

https://itnext.io/how-segments-and-regions-differ-in-decommi...

Articles on DATAS:

https://github.com/dotnet/core/blob/main/release-notes/9.0/p... (quick example of the kind of heap size reduction applications could see)

https://maoni0.medium.com/dynamically-adapting-to-applicatio...


I did write ‘simple’, but obviously meant simpleR. A performant runtime will still require considerable complexity. Also, C# doesn’t underperform, I never said that — partially as the whole platform has access to lower level optimizations that avoid allocating in the first place, as you mention (but Span et alia does make the language considerably more complex than Java - which was my point).

But on the GC side it quite objectively has worse throughput than Java’s; one very basic data point would be the binary-trees benchmark on the Benchmarks Game. This may or may not be a performance bottleneck in a given application; that’s beside the point. (As an additional data point, Swift is utterly bad on this benchmark, finishing in 17 sec while Java does it in 2.59 and C# in 4.61, due to it having a reference counting GC, which has far worse throughput than tracing GCs.) But you are the one who already linked to this benchmark in this thread, so you do know it.


Do Go slices make it more complex? :)

Span<T> makes the language simpler from both the user and C# to IL bytecode point of view, all the complexity is in the runtime (well, not exactly anymore - there's ref T lifetime analysis now). On that note, Java does not seem to have a generic slice type, like ArraySegment<T> which predates spans. I can see it has ByteBuffer, CharBuffer, IntBuffer, AbstractEnterpriseIntProviderFactoryBuffer (/s), etc from NIO as well as sub-Lists(?) and using Streams in the style of LINQ's Skip+Take.

Spans are very easy to use, and advertising them as advanced type was a short-lived mistake at their inception. Since then, they have gotten adopted prominently throughout the ecosystem.

After all, it's quite literally just

  var text = "Hello, World!".AsSpan();
  var hello = text[..text.IndexOf(',')];
  var twoChars = hello[..2];
And, to emphasize, they transparently work with stack buffers, arrays, unmanaged memory and anything in-between. You can even reference a single field from an object:

    var user = (Name: "John", DoB: new DateTime(1989, 1, 1));
    ProcessStrings(new(ref user.Name));

    // Roslyn emits an inline array struct, from which a span is constructed
    // It's like T... varargs in Java but guaranteed zero-cost
    // In C# 13, this gets support of params so many existing callsites
    // that used to be params T[] start accepting spans instead,
    // completely eliding allocations or even allowing the compiler
    // to reference baked into binary constant arrays
    ProcessStrings(["hello", "world"]);

    void ProcessStrings(Span<string> values) { /* ... */ }
On binary-trees - indeed, the results are interesting and Java demonstrates consistently lower CPU time cost to achieve similar or higher throughput (look at benchmark results distribution). It is a stress-test for allocation and collection throughput, yes. However, Java benchmarks also tend to consume consistently more memory even in allocatey scenarios: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

In any case, I have asked around for more data on detailed comparison of heap profiles between G1, Zgc and Parallel and will post them here if I get a response to provide more context. It's an interesting topic.


If your point of reference are Objective-C and Swift only, and you have not looked at how .NET's or Go's (which makes very different tradeoffs w.r.t. small memory footprint) GCs work, it might be interesting to re-examine prior assumptions in light of modern designs (I can't say Go is modern per se, but it is interesting nonetheless).

Also, .NET tends to heavily zero memory in general, as the spec dictates that fields, variables, arrays contents, etc. must be initialized to their default values before use (which is zero). Compiler can and will elide unneeded zeroing where it can see, but the point is that .NET's heaps should compress quite well (and this seems to be the case on M-series devices).


There are popular apps written in C# on the platform, but they're Unity games, which use il2cpp and I believe still use the Boehm GC. I think this demonstrates a different point, since even a bad GC apparently doesn't stop them from shipping a mobile game… but it is a bad GC.

(Games typically don't care about power efficiency much, as long as the phone can keep up rendering speed anyway.)

> Also, .NET tends to heavily zero memory in general, as the spec dictates that fields, variables, arrays contents, etc. must be initialized to their default values before use (which is zero).

Same for most other languages, but there's a time difference between zeroing on free and zeroing on allocation. Of course, once you've freed everything on the page there are ways to zero the page without swapping it back in. (just tell the OS to zero it next time it reads it)


Yeah, Unity has terrible GC, even with incremental per-frame collection improvement. It's going to be interesting to look at the difference once they finish migration to CoreCLR.

If you'd like to look at a complex project, you can try Ryujinx: https://www.ryujinx.org. It even has native integration[0] with Apple Hypervisor to run certain games as-is on ARM64 Macs. There's also Metal back-end in the works.

Other than that, any new .NET application runs on macOS provided it doesn't use platform-specific libraries (either something that uses Linux dependencies or kernel APIs, or Windows ones). My daily driver is an MBP.

A side-note is that on MacOS .NET does not use regions-based heaps yet and uses older segment-based ones. This has implications in terms of worse memory usage efficiency but nothing world-ending.

[0]: https://github.com/Ryujinx/Ryujinx/tree/73f985d27ca0c85f053e...


Man, the term must have changed since I was in school; I thought garbage collection was a much more general concept than a specific tactic to achieve this end of automatic memory collection. Pity, it was a useful term.

It's worth noting many others also consider automatic reference counting to be a form of gc, albeit one with different strengths and weaknesses than stack- and heap-scanning varieties


Memory safe and, with Swift 6, data race safe.


there is Kotlin Native - "Kotlin/Native is a technology for compiling Kotlin code to native binaries which can run without a virtual machine."

https://kotlinlang.org/docs/native-overview.html


> Swift does not use virtual machine and garbage collection, it competes more to c++ and rust

Doesn't it use ARC by default?


Which is reference counting, not garbage collection. Ref counts free when count = 0. Garbage collection scans all object pointers and looks for loops / no missing pointers.


That's tracing garbage collection. Reference counting is another type of garbage collection. https://en.wikipedia.org/wiki/Garbage_collection_(computer_s...


Reference counting is not tracing garbage collection. To also quote a Wikipedia Link: „The main advantage of the reference counting over tracing garbage collection is that objects are reclaimed as soon as they can no longer be referenced, and in an incremental fashion, without long pauses for collection cycles and with clearly defined lifetime of every object.“

+ https://en.wikipedia.org/wiki/Reference_counting


> Reference counting is not tracing garbage collection.

???

They didn't say it was.


Of course reference counting is not tracing garbage collection. I never said it was. The comment I replied to claimed reference counting was not garbage collection at all and seemed to think tracing garbage collection was the only kind of garbage collection. Reference counting and tracing garbage collection are two different types of garbage collection.


Reference counting is a kind of garbage collection.


It does. I thought ARC was more performant than GC and had no stop-the-world issue, and thus wasn't a GC.


Usually, it's the other way around: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

binary-trees is almost completely dominated by the time spent in allocator code, and stresses its throughput. This benchmark showcases how big the gap is between manual per-thread arenas, then tracing generational multi-heap GCs, then ARC and more specialized designs like Go's GC. An honorable mention goes to BEAM, which also showcases excellent throughput by having process-level independent GCs, in this case resembling the behavior of .NET's and OpenJDK's GC implementations.


A tree is indeed a bad fit for RC; so is anything else where you have multiple references to something but know there is a single real owner.

I'd suggest keeping strong references to all the tree nodes in an array, then having everything within the tree be unowned. Basically fake arena allocation.

Actually, the way it's written:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

is a common way you see toy data structure code written, but it's inefficient (because pointer chasing is slow) and there are better patterns. If you use the arena method above, you could use indexes into the arena. If not, intrusive data structures (where the references are inside Node instead of inside Tree) are better.
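
A rough Swift sketch of that index-based arena idea (hypothetical Node/Tree types, just to illustrate the pattern - not the benchmark's mandated structure):

    // All strong references live in the `nodes` array; the nodes themselves
    // only store indexes, so there is no per-node refcount traffic to traverse.
    struct Node {
        var value: Int
        var left: Int?    // index into Tree.nodes
        var right: Int?
    }

    struct Tree {
        private(set) var nodes: [Node] = []

        mutating func add(_ value: Int, left: Int? = nil, right: Int? = nil) -> Int {
            nodes.append(Node(value: value, left: left, right: right))
            return nodes.count - 1
        }

        func check(_ index: Int) -> Int {
            let node = nodes[index]
            let l = node.left.map(check) ?? 0
            let r = node.right.map(check) ?? 0
            return node.value + l + r
        }
    }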


Pointer chasing is irrelevant here. It takes <= 15% of the execution time, and CPUs have gotten good at it. If it takes more - it speaks more about the quality of the allocator which has poor memory locality. As noted in my previous comment, it is dominated by the time spent in the allocator/GC code.

Please read https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

The requirement to implement the same algorithm with the same data structure is what makes this benchmark interesting and informative. Don't tell me "allocate parent object A, allocate child objects C, D and E and assign them to A's fields, then allocate array F and G, and assign them to D's fields" isn't the bread and butter of all kinds of application logic, something that this benchmark stresses.


Some CPUs are good at it, but most aren't. (Apple's are.)

But that's not the actual issue; the issue is that pointers are big (8 bytes) and indexes are smaller, so now you can fit more in the cache. It would also help GC because it doesn't have to trace them.

Also, I don't recommend intrusive structures merely because they'd be better for language games. I think they're better in general ;)


> But that's not the actual issue; the issue is that pointers are big (8 bytes) and indexes are smaller, so now you can fit more in the cache. It would also help GC because it doesn't have to trace them.

Please read 'binary-trees' description and submission rules (#2). You are missing the point(er).


ARC is a variation of GC. Besides, a tracing GC doesn't have to stop the world at all.


It has nowhere near the performance characteristics of those languages. It could, but it doesn't. Look up a variety of language benchmarks. It's typically ranked around Python/JavaScript. You can get as fast as C but the code is very atypical.


There's no way it's close to Python. Where are the benchmarks?

https://github.com/sh3244/swift-vs-python

Shows a huge difference, as expected for a typed memory-safe compiled language using LLVM versus an interpreted language with a global interpreter lock.



The thread you just posted has a bunch of posts indicating this was not actually the same program in Python and Swift; further, the Swift version was written poorly. Plus, the graph in the final post shows whatever Swift version someone ran tests on as much faster than Python.


It is slower, probably ~3x, but let's not exaggerate and say it ranks around Python.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Edit: Kotlin is perfectly fine for _just_ web-servers, Vert.X is great. On systems programming, read below:

All JVM languages are by definition not viable for this domain. The object-oriented and heavily abstracted nature of the underlying runtime implementations prevents their effective usage in systems programming due to the lack of fast FFI, the lack of structs (particularly ones with custom layout), and the historical aversion of the ecosystem to low-level features.

Kotlin Native does not count because it presently has 0.1-0.001x the performance of OpenJDK - it is that bad - and I assume it is subject to the common subset of features that must also be expressible on the JVM.

.NET, especially with compilation to native statically linked binaries (NativeAOT), is an option, and I believe, due to ecosystem maturity as well as the very heavy focus on performance in all recent .NET versions and continued improvement of low-level features (portable SIMD, byref pointers with simple lifetime analysis, static linking with C/C++/Rust/etc.), it is a strong contender. Unlike Java, C# has a great systems programming story; after all, it was influenced as much by C++ as it was by Java - sadly, many only ever think about the latter.

However, I'm looking forward to Swift 6. Once it is out, I'd love to see it offer more opportunities for ensuring static dispatch and generic monomorphization (in .NET, generics with struct arguments are always monomorphized like in Rust, so you have tools for zero-cost abstractions) and happy paths that allow bypassing the prohibitive cost of ARC with new annotations. By using LLVM, Swift has a theoretically great performance ceiling, even if it does not deliver on it just yet, losing to C# by a good margin on more complicated code due to ARC and dynamic dispatch. But because Apple seems to be invested in using it for tasks that will require addressing these shortcomings, it is pretty exciting to see where they will take it.


Isn't Kotlin based on the JVM while Swift is natively compiled? That's a pretty significant difference, and I'm not aware of any "to native" compiler for Kotlin like the NativeAOT approach that exists for .NET...



There are in fact two "to native" compilers for Kotlin: the one for Kotlin only is called Kotlin Native, but you can also use GraalVM native-image to compile any JVM language to native.


How hard is it to build an HTTP server?

(Yeah, this is a dumb question but I'm asking anyway)


Do you mean from first-principles, or letting someone else do the work and use a framework?

Apple did the hard work, https://github.com/apple/swift-nio.

If you just want a big framework to launch an API, that's been around for years: https://vapor.codes


Sneakily hard, actually. There are different versions of HTTP (of course), so pick your target. But when you hit HTTP/2.0, it's not a simple request/reply model (if HTTP/1.1 can be described as such): the intermixing of various client headers and what they imply for server behavior, handling of streams and when to open vs. when to close them, HTTP/2 multiplexing, etc. Don't forget HTTP/3, which uses the QUIC protocol (UDP based) instead of TCP.

Interestingly though, a trivial HTTP server is actually very easy to implement as well. A very crude HTTP/1.0 server (or maybe even a limited scope HTTP/1.1 server) can actually make for a fun afternoon project. Like minimal (or no) concurrency support, all TCP connections closed after the request/response cycle, GET only (or maybe POST only), etc.

So it's a mixed bag of what you want and how you define an HTTP server.
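
To illustrate the "fun afternoon project" end of that spectrum, here's a crude sketch of a single-threaded, close-after-every-response server using raw POSIX sockets (Darwin-flavored: on Linux some constants need casts like Int32(SOCK_STREAM.rawValue), and all error handling is omitted):

    import Foundation

    // Listen on port 8080, read whatever the client sends, reply with a canned
    // HTTP/1.0 response, and close the connection. No parsing, no concurrency.
    let server = socket(AF_INET, SOCK_STREAM, 0)
    var addr = sockaddr_in()
    addr.sin_family = sa_family_t(AF_INET)
    addr.sin_port = in_port_t(8080).bigEndian
    addr.sin_addr = in_addr(s_addr: 0)   // INADDR_ANY

    _ = withUnsafePointer(to: &addr) {
        $0.withMemoryRebound(to: sockaddr.self, capacity: 1) {
            bind(server, $0, socklen_t(MemoryLayout<sockaddr_in>.size))
        }
    }
    _ = listen(server, 16)

    while true {
        let client = accept(server, nil, nil)
        var buffer = [UInt8](repeating: 0, count: 4096)
        _ = read(client, &buffer, buffer.count)   // ignore the request contents
        let body = "hello\n"
        let response = "HTTP/1.0 200 OK\r\nContent-Length: \(body.utf8.count)\r\n\r\n\(body)"
        _ = Array(response.utf8).withUnsafeBytes { write(client, $0.baseAddress, $0.count) }
        _ = close(client)
    }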


I can't think of a good reason to want to implement the complex parts. Write an HTTP service if you must, but make it HTTP/1.0 (or 1.1 for keepalive) and stick it behind nginx to do the newer versions and SSL termination.

(I also think all HTTP services should if possible be written in a shared-nothing CGI type language, and not embedded in your random stateful program. This way you can't accidentally leak info across user sessions.)


Both of these are great points. I do really appreciate an nginx (or other load balancer) front end. Or even cloudflare or whatever AWS/Azure offers. A simple horizontally scalable HTTP/1.1 backend with a reverse-proxy that can uplift your app is a great strategy.

Also, your comment about "shared-nothing" is interesting too. It surely doesn't hurt to think about it in this way, but likewise, might be out of scope for a simple web server concept (for example, if you're not even really supporting sessions at all).


That all depends on your requirements. A few hundred lines of code will get you pretty far, but there are about 100 optional features in common use.


I’ve migrated to Swift for some nontrivial projects that were formerly C++. Quite happy so far, and I didn’t find Rust nearly as pleasant when I tried the same thing there. I don’t want explicit memory management very often, so ARC works great for me. Haven’t had any issues with portability either, although I don’t test often on Windows so I’m not confident there, but Linux and Mac have been great.


I'm glad you found something you like. I just want to make it clear that the things about Rust that make it "unfriendly" are also the things that make it able to do things other languages can't do, like compile-time memory safety. Depending on what you are making, that might make little difference. I just wanted to make sure you appreciated what Rust can do that other languages can't.


This is an article about another language that can do that.



Swift does have borrow checking: https://www.swift.org/blog/swift-5-exclusivity/

Basically the difference is that Swift's is more implicit, happens more at runtime, and it will make some programs work via copy-on-write that Rust would reject.

So that's obviously more limiting. It's more flexible when you can allocate memory freely, but it doesn't work if you can't.
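
For anyone curious, the canonical kind of case those exclusivity rules catch looks something like this (paraphrased from the kind of examples in the linked post; whether it's a compile-time error or a runtime trap depends on what the compiler can prove):

    // `count` is modified through the inout parameter while the closure
    // simultaneously accesses the same variable: overlapping exclusive access.
    func modifyTwice(_ value: inout Int, by modifier: (inout Int) -> Void) {
        modifier(&value)
        modifier(&value)
    }

    var count = 1
    modifyTwice(&count) { $0 += count }   // exclusivity violation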


> happens more at runtime

Bingo, that's the difference. That's why I said "compile-time memory safety". This is what Rust gives you for your trouble, zero (runtime) cost for memory safety.


Curious! In what ways do you see Swift as friendlier than Rust? I perceived it as functionally equivalent, although Swift had a lot more "magic" involved and it was less clear how things might work memory-wise.


To me, Swift has better ergonomics for most people.

Ref counting by default makes most programs easier to deal with.

Guard let (though something similar was recently introduced in Rust) is a much friendlier way to unwrap optionals.

Being able to provide parameter defaults and aliases.

Easy passing in of callbacks. Easier async.

Null chains with the Question mark operator.

I really like rust, but it’s much faster to get to a working program in Swift.

And with the new CXX interop, I now reach for Swift by default since I have to deal with a lot of C++ libraries.
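
A quick flavor of a few of those together - made-up types, just illustrating guard let, default parameter values, and optional chaining:

    struct User {
        var nickname: String?
    }

    // guard let exits early and keeps the happy path unindented
    func greet(_ user: User?) -> String {
        guard let user, let name = user.nickname else { return "Hello, stranger" }
        return "Hello, \(name)"
    }

    // optional chaining + nil-coalescing, with a default parameter value
    func banner(for user: User? = nil) -> String {
        user?.nickname?.uppercased() ?? "GUEST"
    }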


> Easier async.

I was on board until this one. Async is a rough spot for Rust, but I find the async strategy Swift went with absolutely baffling and difficult to reason about.


I’m curious, what puts you off of them? Actors are pretty standard ways to do it, and I feel like most of the successful rust implementations are actor based as well.


More magic (thus less required explicitness) and less involvement with memory management are typically considered as friendly traits in programming languages.


> More magic (thus less required explicitness) and less involvement with memory management are typically considered as friendly traits in programming languages.

Really depends on the context. I really, really, really hated this instinct in the ruby on rails community when I was still doing that. Magic is nice until it doesn't work the way you expect, which is when it becomes an active liability.

I really don't spend much time thinking about memory management in Rust, but I can certainly understand why one might be happy to not have to design around ownership and lifetimes. I really like the explicit nature of it, though, makes it super easy to read and reason about code you've never seen before.


I only want one feature from Swift: faster type-checking. If you write a few lines of math, or a moderately complex SwiftUI expression, suddenly the typechecker starts timing out. Swift is the only language I've ever used with this problem.

https://hachyderm.io/@evanw/109859384302551859

https://www.cocoawithlove.com/blog/2016/07/12/type-checker-i...

https://danielchasehooper.com/posts/why-swift-is-slow/

This isn't mentioned at all in the announcement so I'm kind of disappointed.


Chris Lattner recently explained that because of function overloads (and other features), type inference is really hard to make fast in Swift, and is why he says that "using Hindley-Milner was a mistake". [0]

[0]: https://youtu.be/ENviIxDTmUA?si=FavjcK8IQygnlIwT&t=4417


I don’t think Swift uses Hindley-Milner though?


It's a fair point, but radically different 8 years on.

Swift type inference is two-way, so you can often improve things by annotating types instead of using inference.

Also, it's very easy to have complex builders and nested closures that complicate inferencing, so it's a bit of a side effect of how much people use the powerful features.

The big compile-time gotcha now is using macros. To be safe, each macro runs in its own process hogging cores, and the new Xcode compile scheduler isn't quite sensitive to that, so sometimes command-line builds with less parallelism are a fair bit faster. The solution there is judicious use, particularly of libraries depending heavily on macros.
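
A contrived example of the annotation advice above (whether any particular expression is actually slow depends heavily on the compiler version, so treat this purely as an illustration of the workaround):

    // One big expression: the solver has to consider every numeric type the
    // literals could resolve to, across the whole chain.
    let total = [1, 2, 3].map { $0 * 2 }.reduce(0, +) + 4.5

    // Same result, but named and annotated intermediates give the type checker
    // far less to search through.
    let doubled: [Int] = [1, 2, 3].map { $0 * 2 }
    let total2: Double = Double(doubled.reduce(0, +)) + 4.5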


I can only imagine how terrible things must have been if the current state is radically improved. I can lock up Xcode by doing nothing more than backspacing over a few characters.


I wouldn't expect it, ever, frankly. Seen multiple Apple people come and go while explaining how they were going to solve that (c.f. 2016(!) for link #2)


I'm incredibly excited for this. I thought swift was basically going to be stuck on macOS.

Last time I converted the Swift compiler from the Ubuntu package to work on Debian, stuff was looking really awry. Most things work, but not simple things like SIGTERM signals.

Swift is a fantastic language - I think the most advanced and smartest language today. And I say this having used over 20 languages professionally over 25 years.

Just look at how SwiftUI is implemented. It's not even a DSL, it's Swift naturally! Compare it to Flutter and you'll see how incredible it is. (I do like Dart too though)

As for the language, it's full of clever features and advanced ideas that don't suck to use and that consider the developer's real-world use of the language.

Two things really suck in swift though; compiler error messages are straight out of the university of assholery and documentation was crafted in Mordor probably.

Of course most libraries probably won't work well on Linux yet, but there is a future here with the right balance between safety, speed, and joy of developing.


I recently started learning Swift and SwiftUI and was surprised at how complicated the language is. Especially regarding reactive instance variables, e.g. @ObservableObject. Didn't understand it. There are like five different ways to do it. Ended up playing whack-a-mole until it worked.


TBF, reactivity in UI is still basically an unsolved problem with frameworks going in circles between data-binding, dependency-tracking, memoization & compilation.

SwiftUI initially promoted their ReactiveX alternative.


If you’re targeting newer OSs you can try the @Observable macro instead of ObservableObject. It fixes a lot of the weird problems with the latter (although does introduce some new, weird edge cases).


I had the same experience until I worked with swiftdata which was rather nice, especially by comparison.


Swift needs to figure out what it wants to do and stick to it. Too much syntactic sugar and too many half-baked concepts pushed through the door.


+1


It's super nice that they support moving up incrementally.

Moving to Swift-6 mode with full data-race safety checks can be daunting. They wrote a separate post on that, and Holly telegraphed that they're still actively reducing noise, i.e., warnings where the compiler could use a bit more analysis to prove there's no race.

The really nice thing is you can use the new tooling, but stay with the 5.10 version of the language that works for your code, and feature-by-feature add Swift 6 checking. You can build the same package under both language modes, so libraries can move ahead in their versioning while still supporting clients who are not ready.
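
In a package, that incremental path looks roughly like this (a sketch - the exact feature names and settings are spelled out in the migration guide, so treat these as illustrative):

    // swift-tools-version: 6.0
    import PackageDescription

    let package = Package(
        name: "MyLibrary",
        targets: [
            .target(
                name: "MyLibrary",
                swiftSettings: [
                    // Stay in the Swift 5 language mode for now...
                    .swiftLanguageMode(.v5),
                    // ...while opting in to individual checks one at a time.
                    .enableUpcomingFeature("ExistentialAny"),
                    .enableUpcomingFeature("StrictConcurrency"),
                ]
            )
        ]
    )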


I’m sort of in two minds. On paper, I think for greenfield projects data-race safety checking looks great. If I switch to thinking about my day job, I know we will likely never adopt Swift 6 for some of our components because they’d need to basically be rewritten, at which point we’d probably consider something more cross platform like Rust or Kotlin. So despite liking the feature on paper, I feel like the overhead it introduces in practice is kind of pushing me away from the language.

It’s hard to say at this point though, adoption might get a lot easier with subsequent point releases.


The whole concurrency agenda with local reasoning sounds great in theory but in practice becomes such a pain in the ass for projects that have existed for years.

Maybe our current app has unknown data race bugs, maybe not... with a crash-free session percentage of 99.80% and hundreds of thousands of monthly users, it's not something I consider a big enough problem that more friction should be added to the language to maybe fix it.


This is pretty much the conclusion we also end up at: data race issues aren't our main issue right now, although zero would be a nice-to-have. Every time I've tried out Swift 6 language mode I also feel like I'm sometimes appeasing or tricking the compiler rather than thinking about actual problems.


The download sizes are quite large: 775 MB for swift-6.0-RELEASE-ubuntu22.04.tar.gz, and ~500 MB for Windows.

Is this shipping an entire copy of LLVM? What could possibly make this so large?


Yeah, they contain binaries for Swift itself (the driver interface, the compiler, the repl, the package manager, swift-format, etc.), as well as the toolchain's dependencies (llvm, lldb, clang, the sanitizer runtimes, etc.).

There are other (also large) downloads to enable producing static binaries on Linux, and those contain the required system libraries (musl libc, libicu, libcrypto, libpthread, etc.). Those are about twice as big as they need to be, because they bundle x86 and aarch64 together.


Cross-target toolchain support for shipping for all platforms from any platform? (Not sure that's in there, but if it were, it'd be large.)


I wish they would stop introducing more magic syntaxes.


I agree! I'm a Go programmer, and while I do wish it had some more features at times, Swift is an example of how it can easily go out of control and ruin a promising language.

For example, tests: there's so much magic. How do I know it runs the test for each item in the arguments array? What if there were multiple arguments? After using Go for close to a decade now, I'm really seeing the wisdom of avoiding magic, and of making your testing code the same language as your building code! Compare:

Swift:

    @Test("Continents mentioned in videos", arguments: [
      "A Beach",
      "By the Lake",
      "Camping in the Woods"
    ])
    func mentionedContinents(videoName: String) async throws {
      let videoLibrary = try await VideoLibrary()
      let video = try #require(await videoLibrary.video(named: videoName))
      #expect(video.mentionedContinents.count <= 3)
    }
Go:

    func TestMentionedContinents(t *testing.T) {
      tests := []struct{ Name string }{
        {"A Beach"},
        {"By the Lake"},
        {"Camping in the Woods"},
      }
      for _, tt := range tests {
        video, err := library.FindVideoByName(tt.Name)
        if err != nil {
          t.Fatalf("failed to get video: %v", err)
        }
        if len(video.MentionedContinents) > 3 {
          t.Errorf("video %q mentions more than 3 continents", tt.Name)
        }
      }
    }


Go with timeout handling in case the FindVideo function takes too long (idk Swift magic well enough to know if it'd do this automatically!)

    func TestMentionedContinents(t *testing.T) {
      tests := []struct{ Name string }{
        {"A Beach"},
        {"By the Lake"},
        {"Camping in the Woods"},
      }
      for _, tt := range tests {
        t.Run(tt.Name, func(t *testing.T) {
          ctx, cancel := context.WithTimeout(context.Background(), 30*time.Millisecond)
          defer cancel()

          video, err := library.FindVideoByName(ctx, tt.Name)
          if err != nil {
            t.Fatalf("failed to get video: %v", err)
          }
          if len(video.MentionedContinents) > 3 {
            t.Errorf("video %q mentions more than 3 continents", tt.Name)
          }
        })
      }
    }


> How do I know it runs the test for each item in the arguments array?

At the risk of coming across a bit rudely: this feels analogous to asking “how do I know `for _, tt := range tests` loops over every element in the array?” Both are language/syntactic constructs you have to learn.


Maybe I'm being a bit harsh myself, but with the Go code, it's the same syntax that I use whenever I would write a for loop anywhere else in my production codebase. It's not something special to testing, it's literally _just code_.

However, I do like Swift; I in fact single-handedly wrote an entire iPhone app in it, used by tens of thousands of people, and there were a lot of wonderful things, like nullability being "solved", smart enums, etc. This isn't a language war; I like them both, and could point out flaws in either just as easily.

I just feel like Swift has a bit too low of a bar for adding new features, which leads to a lot of nice things, but also a lot of functionality bloat. I can look at Go written 10 years ago and it'll largely be the same as how it'd be written today; with Swift, it's night and day. I built the aforementioned app in 2014 to 2017 (mostly the first year), and there's so much I don't recognize.

I think one of the things that bothered me the most, where I feel they sort of "jumped the shark", is ViewBuilders. It looks like code, acts like it 99% of the time, but it isn't. Does that `var body: some View { ... }` return a view anywhere? No, and it'll break if you try! It's a whole different concept that admittedly offers a very nice experience emulating React with idempotent views.

But still, it's awfully strange that this works:

    struct IntroView: View {
      @State private var text = "Yes"
      var body: some View {
        VStack {
          Text(text)
          Button("Toggle") {
            text = text == "Yes" ? "No" : "Yes"
          }
        }
      }
    }

But this does not, because it's not _actually_ code.

    struct IntroView: View {
      @State private var text = "Yes"
      var body: some View {
        VStack {
          Text(text)
          Button("Toggle") {
            text = text == "Yes" ? "No" : "Yes"
          }
          print("debugging line")
        }
      }
    }


Except it is. var body is a variable declaration. some View {…} is nested closures with a single statement each so the return keyword is implied. Just add return in your second example


SwiftUI's ViewBuilder isn't part of the Swift language. It's a macro implemented in Swift.

Now whether macros themselves were a good idea or not, that's an entirely different question.


Right. But it strikes me as one of those things that really helps new developers become productive quickly -- to them, it looks just like code, and they probably don't realize it's not code.

But once your app hits a certain size, the abstraction will inevitably leak somewhere, and now you'll need to step back, and learn all about macros and viewbuilders to be able to fix your issues.

Probably worth it! They enable such a wonderful way of drawing a UI. But it's not a zero-cost abstraction, and I like that Go has eschewed _anything_ like that at all. There's no IEnumerable with yield return that compiles into a state machine. No ViewBuilder that's a macro, not code. There's no Ruby-like adding methods at runtime (Model.find_by_bar_code???). It's all just plain code right in front of you.

They both have their strengths. I think Go made the right trade-off for simple high-throughput microservices, as they're trivial to understand and will never surprise you. And Swift made the right tradeoffs for a UI—-it would be painful to write it in Go.

My reaction as someone who used Swift extensively in 2015, quit, and now came back in 2024 is "wow, did they add too many features? that's a lot of quirks to learn." FWIW I don't feel the same way about TypeScript or C#.

It just feels at first glance like Swift let every feature in the door. And now we've got an awfully complex and laggy compiler in my experience, especially when you run into one of the type inferencing pitfalls leading to an exponential algorithm. Go explicitly made fast compilation a design goal and lost a lot of features to it.


It is code though. It’s a bunch of nested closures with an implicit return


Result builders aren't macros.


@Test, #require and #expect are just macros. You can expand them if you want to see what they do (or just look at the swift-testing code itself).

Perhaps I'm just used to Python unit testing with similar decorators. Presumably, if you need to pass in two arguments, you'd either pass arguments: an array of tuples or a tuple of arrays for combinatorial testing.
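
If I remember the swift-testing docs right, you can pass two collections and every combination gets run (or zip them if you want pairs instead) - roughly:

    import Testing

    // Two `arguments:` collections means the test runs once per combination.
    @Test(arguments: ["A Beach", "By the Lake"], [1, 2, 3])
    func continentLimit(videoName: String, limit: Int) {
        #expect(!videoName.isEmpty && limit <= 3)
    }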


> How do I know it runs the test for each item in the arguments array

I mean the APIs aren’t magic; you can “inspect macro” to see what code is generated at compile time which boils down to something similar to the Go code with better ergonomics.


I don't know if I'd agree better ergonomics in this case, since you lose a lot. What if you wanted to load your test cases from a CSV? e.g. you had a two column CSV with 1,000 words, first column mixed case, second case lowercase. They're mixed languages, with unicode oddities sprinkled in. So it really is worth testing against such a corpus. In Go, I could simply open a csv checked into the codebase and use it for my test cases. I'm sure it's possible, but you probably have to break way from the macro (which I argue doesn't add anything) and take a completely different approach. In Go, it's JUST CODE.

Again, I really like Swift (besides Xcode performance at times… ugh!). It's possible to find flaws in both languages and still like them. Swift knocks Go out of the water in so many ways. But I'm scarred from Ruby on Rails magic growing and growing until you had to be an expert in the magic to write code, when the point was for the magic to make it easier.


Is there a specific one you’re objecting to in 6?


Since you asked:

The provided `drinkable` example I think is pretty bad and it's very surprising to me that this is a headline feature.

  protocol Drinkable: ~Copyable {
    consuming func use()
  }

  struct Coffee: Drinkable, ~Copyable { /* ... */ }
  struct Water: Drinkable { /* ... */ }

  func drink(item: consuming some Drinkable & ~Copyable) {
    item.use()
  }

  drink(item: Coffee())
  drink(item: Water())

Here we have a drink() function that either accepts something `Copyable` OR non-Copyable (uh, I mean, `~Copyable`) and either consumes it...or doesn't? That seems to me like a fountain for logic errors if the function behaves completely differently with the same signature (which is, in fact, explicitly labeled `consuming`). It seems like it should not just compile if you try to call this with a `Copyable` type like Water, but it does.

The syntax for representing this complex and weird concept of "maybe consuming" being `consuming some Drinkable & ~Copyable` is also just totally gross. Why are we using bitwise operation syntax for some weird and logically incoherent kludge? We cannot apply these & and ~ operators indiscriminately, and they do not mean the same thing that they logically mean in any other context, but this function definition definitely implies that they do.


The issue is that `~Copyable` is basically an anti-protocol.

With a generics definition like `some Drinkable` you are _restricting_ the set of suitable types (from any type to only the ones implementing `Drinkable`), which then _expands_ the available functionality (the method use() becomes available). From the perspective of the type `Water`, its conformance to `Drinkable` expands the functionality.

The language designers then get in a pickle if some functionality was assumed to exist for all types (e.g. `Copyable`)! By "conforming" to `~Copyable` you are _removing_ functionality. The type can NOT be copied which was assumed to be universally true before. Now, a generics definition like `some ~Copyable` actually _expands_ the set of suitable types (because Copyable types can be used as if they were non-copyable) and reduces the available functionality. It's the inverse of a regular protocol!

It becomes extra confusing if you combine `some Drinkable & ~Copyable` where `Drinkable` and `~Copyable` work in opposite directions.

This problem also exists in Rust. `Sized` is a trait (aka protocol) that basically all normal types implement, but you can opt-out by declaring `!Sized`. Then, if you actually want to include all types in your generics, you need to write `?Sized` (read: Maybe-Sized).


Here’s my take. I haven’t used this feature yet so I haven’t dug in too deep.

drink() takes a Drinkable. A Drinkable can be non-copyable.

Copyable is the default, so it has to mark itself as accepting non-copyables.

Coffee is non-copyable. Water doesn’t say which means it’s copyable (the default).

You can use a copyable anywhere you’d use a non-copyable since there is no restriction. So since drink can take non-copyables it can also use copyables.

I’m guessing the function definition has to list non-copyable otherwise it would only allow copyable drinks since the default is all variables are copyable.

“consuming some” means the function takes over the ownership of the non-copyable value. It’s no longer usable in the scope that calls drink().

For the copyable value I’m not sure but since they can be copied I could see that going either way.

On syntax:

Yeah it’s a bit weird, but there was a big debate about it. They wanted something easy to read and fast to use. NotCopyable<Drinkable> is really clear but typing it over and over would get real old.

~ is not special syntax. My understanding is “~Copyable” is the name of the type. You can’t just put ~ in front of anything, like ~Drinkable. But since that’s the syntax used for bitwise negation it’s pretty guessable.

& is existing syntax for multiple type assertions. You can see the evolution in this Stack Overflow answer:

https://stackoverflow.com/a/24089278

Seems to read like C to me. It has to be Drinkable and not Copyable.

Like I said I haven’t gotten to use this yet, but it seems like a nice improvement. And I know it’s a step in the path towards making it easier to do safe asynchronous programming, object lifetimes, and other features.


Yeah after thinking about it a bit more it does make more sense to me. The primary gap I had was, as you allude here:

> You can use a copyable anywhere you’d use a non-copyable since there is no restriction.

Effectively copyable always conforms to non-copyable, just not the other way around.

And the compiler effectively automatically notates literally everything with Copyable, so you need the explicit (& ~Copyable) in the function definition so you're still able to define functions within a ~Copyable protocol that have Copyable semantics.

It's very in the weeds, and I still don't like it (I would have preferred NotCopyable since the ~, especially next to the &, directly implies something like bitwise operators), but I guess custom ownership is itself very in the weeds and you will have to think about it hard no matter what approach is taken. I would have expected custom ownership to be fundamentally incompatible with Swift, but clearly it's here; I should probably read about it more so I have a more clear understanding.

(I also didn't realize & was extant syntax).


Yeah, I’ll admit it’s hard to get your head around. I had to think about it a couple of times just writing that explanation.

It took me a couple of minutes to figure out why it was in the function definition. I guess it had to be but that wasn’t obvious to me at all at first.

> And the compiler effectively automatically notates literally everything

Right. Just like how all classes in Java extend Object even though you don’t have to literally write it.

I believe they’re still working on a more Rust-like borrowing system, but I could be wrong. I know this helped them implement parts of the standard library much better because they could make assumptions that you can’t make with copyable objects.

I do get your point about just calling it NotCopyable. I don’t actually know why they settled on the name they did, I didn’t ever try to look that up. Maybe it’s because it’s a special thing that requires compiler machinery and there’s no way for you to make an equivalent?


Every time I read about Swift it seems like a nice language, but it's a shame its ecosystem is so limited. It's great to see it supporting more platforms now. Is the ecosystem in the package manager/index also heading in the same direction?


For the question, I think it depends on what you mean. I've been able to use SwiftPM on Linux without a whole lot of issue, so SwiftPM itself doesn't seem to be the blocker here - at least I've not had issues in my experimentation. What is maybe more of the blocker is the number of useful libraries that are compiled and tested on Linux. For that, the solution is just for people to write libraries or upstream Linux support to existing libraries.


Making more and more of Foundation available (a big part of 6) should also help a lot. Fewer external dependencies needed, and fewer per-platform libraries if you support multiple platforms.


Yes, it should. I am excited for it; the bit of Swift on Linux that I've done I have enjoyed, and I look forward to it getting better and better.


Swift is over-engineered, and they don't care about compatibility between versions; they just add things even if we could live without the new features. It will become a huge language that is hard to use and properly understand, and it will die.

Flutter, KMP and maybe even ReactNative will become much simpler and better choices and native on iOS will slowly but surely lose developers in my opinion.


I recently tried SwiftGodot, because I found Swift for game dev very interesting in comparison to C#'s GC stalls. Sadly it does not yet support sharing prebuilt Windows development libraries[1]. Compilation does take quite some time, even on my beefy 16-core Ryzen. The changelog did mention multi-threaded Swift package manager improvements on Windows though.

[1] https://github.com/migueldeicaza/SwiftGodot/issues/521


I've been using SwiftGodot lately too, and I'm quite enjoying it! iOS development is my day job, so it was great to port my old GMS2 project's codebase over to Swift, and suddenly have type-safety and no GC hiccups. During the porting process, the Swift compiler even discovered a couple of latent bugs for me!


I'm gonna say it: the Swift language has become a monstrosity.


Yup, I agree; to me it is turning into the next C++, with the kitchen sink in terms of features. The language is trying too hard to cater to too many use cases.

At some point I hope it stabilizes without it being the end of the language.

I use Swift every day and while I still enjoy it, I think the language was nicer to use in its earlier iterations than it is now, which is a shame.


I was hoping for the same about 2 releases ago. We're way beyond unsalvageable.

If I was Chris Lattner, I would be lying in my bed wide awake each night, staring at the ceiling, wondering how it went so wrong.


Explained to a layman, how nice is this release? Is more platform support gonna make Swift a first-class language?


The biggest feature here in that regard is the static Linux SDK and the cross-platform Foundation library.


I'm not familiar with Swift - does anyone already know how typed throws hold up? Checked exceptions are pretty universally seen as a mistake in Java [1], while ADTs à la Result are generally perceived better.

[1] See e.g. https://literatejava.com/exceptions/checked-exceptions-javas..., but also Java 8+ API's moving away from them.


You can read about the trade-offs in the language proposal here: https://github.com/swiftlang/swift-evolution/blob/main/propo...

In particular:

> Even with the introduction of typed throws into Swift, the existing (untyped) throws remains the better default error-handling mechanism for most Swift code. The section "When to use typed throws" describes the circumstances in which typed throws should be used.


I think your link got cut off. Here’s the direct link to the section on when to use typed throws. I hadn’t read this before, and it changes how I’ll approach them. Thanks for pointing it out!

https://github.com/swiftlang/swift-evolution/blob/main/propo...


Swift throws are results, but now with specific and generic and specializable types.

To illustrate, Swift has a nice `rethrows` feature that helps with function composition.

If your function takes a function parameter that can throw, it can use 'rethrows' to say "I only throw if this parameter does". Then when passed a function that doesn't throw, your function need not be invoked with `try`.

This plays nicely with generics over throwing type, since the bounds propagate back to the function type. If the function parameter only throws a given type, that's what the function will throw.

Also helpful for reducing error boilerplate, `try` has the convenience form `try?` which means "just swallow the exception and return nil", and applies to a whole chain: `let f = try? this() ?? that() ?? "skip"` means f will be the result of either (throwing) function or a literal otherwise.


Throws in Swift are not traditional exceptions. A throwing function in Swift is actually a function that can return an error instead of a result. This is hidden by the language.

So something like

`func foo() throws -> Result`

Is actually

`func foo() -> Result | Error`

The compiler also forces you to handle any returned errors using `try`. So to call our example `foo`, you'd do:

`let result = try foo()`

You must either handle any throws error or include this call in an enclosing throwing function.


>Throws in Swift are not traditional exceptions. A throwing function in Swift is actually a function that can return an error instead of a result. This is hidden by the language.

Implementation detail.

The two features are equivalent.


Using the normal return path vs. nonlocal returns is, I think, not equivalent unless you have a GC. Otherwise you need all that "exception safe code" stuff.

But the main difference is it's not hidden by the language; you have to 'try' any method that can return an error. Straight-line control flow in an imperative language is a good thing IMO. …but too bad about those defer statements.


>Using the normal return path vs. nonlocal returns is, I think, not equivalent unless you have a GC.

I will repeat, whether exceptions are implemented as "nonlocal returns" (like setjmp/longjmp with stack unwinding) or as syntax sugar with sum return types, is completely irrelevant; an implementation detail. The generated machine code is different, but the behavior, the user experience is exactly the same.

>Otherwise you need all that "exception safe code" stuff.

In both cases, you need to write "exception safe code". Example of unsafe code in Java (a language that implements exceptions as non-local returns):

  void bar() throws Exception { ... }

  void foo() throws Exception {
    mutex.lock();
    bar();
    mutex.unlock();
  }
Example of unsafe code in Swift (a language that transforms errors into sum return types):

  func bar() throws { ... }

  func foo() throws {
    mutex.lock()
    try bar()
    mutex.unlock()
  }
>But the main difference is it's not hidden by the language; you have to 'try' any method that can return an error. Straight-line control flow in an imperative language is a good thing IMO. …but too bad about those defer statements.

Whether the language makes you prefix throwing calls with "try" is completely orthogonal to how they're implemented (nonlocal return vs sum return type). It's just a matter of syntax.


Checked exceptions are in no way universally seen as a mistake. I thought, and still think, that they are a great feature of Java.


I never understood why checked exceptions “are bad”, but Result types “are good”. To me they feel like conceptually the same idea.


The real answer IMO is how they (don't) integrate with generics, first-class functions, and the type system in general. If you try using a checked function inside a stream() call you'll know exactly what I mean. Yes, it's technically possible to make a "functional interface" with a fixed number of checked exceptions, but in practice it's a huge PITA and most functions just don't allow checked exceptions at all.

Compare to ATDs, where any generic function taking a T can also be used with Result<T> exactly the same. (Not a perfect comparison, but there are lots of other scenarios where ATDs just compose better)


It’s often just the boilerplate. As a Java dev people often end up ignoring it and changing everything to “throws Exception” to avoid having to list 4 exception types all over a call stack.

Or they catch and wrap everything in some CustomSysException type so they only have to list one type and it’s not Exception, but then that’s really the same thing isn’t it?

I think it’s kind of a combination of not dealing with things and throwing them up the stack combined with too many exception types and maybe using exceptions when a result type would just be easier.


It's useful to have errors which are less typed than your "actual" results, because if you write a library then you don't want every possible error to be part of your API. If you do, and you call some other library, then either you have to swallow its errors or commit to having all of its error APIs be part of yours too.

And in the end many errors can't be handled automatically, the best you can do is just show them to the user.


Checked exceptions are universally seen as a mistake in Java*

* - according to a couple of loudmouths on the internet


And the designers and the Java stdlib itself, seeing as it's pretty much all runtime exceptions in everything introduced in Java 8 and later

Also JVM guest languages like Kotlin and Scala treat all exceptions like runtime.


Checked exceptions with lambdas are a nightmare from what I remember.


Used typed throws for a bit and I like them. I remember from long ago the Java implementation forcing me to catch them but in Swift it’s just a bit of extra compiler-enforced documentation about what and what not to expect in terms of errors.


> I remember from long ago the Java implementation forcing me to catch themm

“throws Exception” on your method.


*on all methods

And then in main:

    try {
      // whatever
    } catch (Exception e) {
      System.out.println("Oops!");
    }


Pretty universally seen as a mistake by those people who see them as a mistake. :)


Mostly by Java haters, that even miss the point checked exceptions appeared first in CLU, Modula-3 and C++ before Java was even an idea.

Forced checks on sum types with automatic unwinding are another way of doing checked exceptions, that apparently those haters love so much.


CTRL-F "no longer takes 37000 years to typecheck a simple expression".

"0 matches".

Maybe on the next one.


I really love the Swift language. I think it is really nice to program in. When it is really good, it just is so satisfying. I can get that "ahh" feeling. I feel like it's this weird blend of Ruby, Haskell, Javascript, and some C# influence. And of course there is always Objective-C baggage lurking in the background.

I agree with most of the comments on this thread. The governance of the language really is a problem. You really can't run two models on the same project at once. The incentives just always get screwed up. Better to just be honest about what's going on, and then everybody can work together. But sometimes that means that people need to start getting paid, and companies love to not pay money if they can avoid it.

Also, Apple's tooling is just truly horrendous. It is just awful. I don't understand why they won't do anything about it. Xcode, there are some things that are nice about it, but compare it to really using a tool like IntelliJ IDEA well and Xcode just fails so hard.

And certain UX problems have literally persisted for a decade, and Apple has not done anything about it. It really is kind of unconscionable that Apple does this to their developers. I know on some level I am being ungrateful, because there are mountains being moved behind the scenes by Xcode, and I really do appreciate that. And from watching WWDC talks, I know that there are some really good engineers at Apple that are trying to do the right thing. And some of it is just corporate BS that is unavoidable. But they really do need to get better.

In any case, I hope that this update makes everybody's life better. When it is really nice, I think that Swift has some of the best developer ergonomics out there. That's just one person's opinion, of course.


I’m honestly not usually that interested in feature lists for languages I don’t use, but this really does look quite compelling. Ownership semantics and C++ integration? Sign me up.

I’d be interested to know how good this integration is in practice e.g. could it be used to integrate with Godot directly?


Something like this? https://github.com/migueldeicaza/SwiftGodot

> SwiftGodot can be used to either build an extension that can be added to an existing Godot project, where your code is providing services to the game engine, or it can be used as an API with SwiftGodotKit which embeds Godot as an application that is driven directly from Swift.

Miguel de Icaza gave a great talk on Swift + Godot (among other things) https://www.youtube.com/watch?v=tzt36EGKEZo


I guess my question is how much a custom bridge is needed with the C++ integration. (I understand the bridge will always be better in some sense, but I’m also interested in what it would be like to just ignore it and target the C++ API directly.


Seconding this! I've ported a GMS2 project over to SwiftGodot, and it's been great to work with. As I ported over the code, the Swift compiler even caught a bug or two that I'd missed back when it was written in GML!


I've been mixing Swift and C++ in an app and it works pretty much as advertised.


> I’d be interested to know how good this integration is in practice

I haven’t used it, but from looking at the long) video at https://youtu.be/ZQc9-seU-5k?si=JMFCWKUZ0vVtst9K (discussed in https://news.ycombinator.com/item?id=38444876), it looks pretty good.


I'm interested in seeing where Swift Testing goes.


Swift still lacks some concurrency features, even with Swift 6, but it's nice to see that atomic operations have finally been added. Besides that, I often encounter an error without knowing why: 'Failed to produce diagnostic for expression; please submit a bug report.'


What platform has 128-bit integer support to where that was an important thing to add to the primitive types?


Many platforms have an instruction to get the upper 64-bit word of a widening 64×64-bit multiplication. But it can be difficult to (performantly) access such instructions from anything higher-level than assembly, so well-optimized language support for 128-bit integers can open this up.


Swift had made this available for a long time via multipliedFullWidth (SE-0104, implemented in Swift 4.0), no Int128 required. E.g.: https://godbolt.org/z/dvMefahdq


IPv6 and ZFS use 128-bit integers - I suspect this is why this primitive was added.


(Author and implementer of the Swift Int128 proposal here)

There's lots of one-off uses for 128-bit integers scattered around systems programming (and programming in general, I just happen to come from more of a systems background). Individually using a pair of 64b integers for each case is totally workable, but eventually life is easier if you just have an Int128 type. There's not really any one use case that drove their inclusion in the standard library, rather lots of times that we were saying "it would be kind of nice if we had this".


Thanks for your work!


As a side note, zig supports arbitrary width integers. I'd like more languages to support that. You can have i7, i3, i1024 or whatever you want (I guess up to some limit).

Makes it a lot easier to handle packed binary files etc.


Maybe something to do with AES 128bit blocks


Swift is really cool but has really sharp edges for a primary lang for such a huge ecosystem


I find this bid for cross-platform recognition somewhat schizoid given that Swift isn't even backwards compatible on Mac OS. I has to upgrade to Ventura a while back just to get Swift regexen working.


This is somewhere that Swift is better on non-Apple platforms. You have to upgrade macOS to get Swift bug fixes because the runtime is part of the OS, while on other platforms it’s a separate thing that can be upgraded on its own.


Maybe they should get their own house in order first.


Swift is great but I wish it had namespaces / packages...


Swift has packages and a module system.


I think the complaint is the global runtime namespace, not source modules. Statics live forever, and extensions in any library on a type apply to all uses in the runtime (with no guarantees about conflicts).

Mostly that's minimizable with good practice, but can be annoying to those used to hierarchical namespacing and static memory reclaimed after the class becomes unreachable.


What I meant is that if you have a larger project with hundreds of files, every top-level component in every file is globally available, which is a problem.


No submodules however.


Programmers can't be trusted with submodules, as you can see from C#/Java and its standard libraries where everything is named like System.Standard.Collections.Arrays.ArrayList.

Of course, taking them away doesn't stop them from other kinds of over-organization like ArrayFactoryFactoryStrategies, but it helps a little.


"Programmers can't be trusted with..." isn't the best argument here IMO. You already gave one reason why. Programmers will create a mess regardless IMO, despite how nice the language is. Adding to that (1) among all he things I didn't like about Java, nested modules were least of it. (2) Lot of it has to do with how reference code in that ecosystem are written which are then adapted as standard practice. Its all good for stuff that do one thing, but if you are building things in same repo with several components, they are nice to have. Rust/Python/Racket are few languages I can think of which have submodules/nested modules and I've not heard people complain about that there.


did not find anything metioned about namespaces/packages support? quite disappointing :( this is the main disadvantage for my tastes


The unqualified imports are definitely a major pain point. In a large Swift app I find myself having to be eternally vigilant organizing my files so that I'm not dumping surprises into the import site.

Makes me appreciate how even Node.js got it right so early on.


I also disliked unqualified imports. I can never get used to languages where you can 'import foo' or 'use foo' and it just dumps a bunch of symbols into the current scope. That combined with Swift's 'implicit member expressions' makes it difficult to read code outside of an IDE (although I understand why they made that tradeoff given some of the long identifiers Swift inherited) [1].

[1] https://github.com/Quotation/LongestCocoa


I've tried using it once and got compiler crash from the get go. Still no response on the bugtracker. Next try in 5 years.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: