
With safe concurrency and typed throws, Swift is starting to look a lot like a friendlier Rust to me. Honestly pretty excited to take a look at it again, though I doubt it will become my daily driver due to the smaller package ecosystem. Hopefully cross-platform Foundation is a step towards improving that though.



> Swift is starting to look a lot like a friendlier Rust to me.

That's what I thought, so I rewrote my CLI util in Swift. It ran great on macOS; then I tried to build for Windows and found out there's no well-maintained, actively developed HTTP server for Swift on Windows.

Don't let the wooing crowds fool you.


Ha! That seems to answer my next question: what’s its story for web development? Does it have one?


You can't compile for Linux from Xcode (the de facto IDE for all things Apple), and all web dev runs on Linux.

If you like having an IDE instead of scrolling multi-page compiler error dumps in your terminal window - this is a complete non-starter.

The leading Swift web framework (Vapor) suggests you use Docker to build for Linux. I gave it an honest try - their empty starter 'hello world' web server takes more than a minute to compile. OK, but surely it'll be faster after I make a one-liner change? No - their Docker workflow has zero compiler caching - you'll be waiting more than a minute every time.

Complete non-starter.

I ended up installing a VM and the Swift compiler inside it, and that only takes 2-3 seconds to recompile a one-liner change (in a print statement, in an empty project, lol). Consider me very deeply unimpressed.

By comparison - a Visual Studio Code + Docker + Python/Ruby/JavaScript setup is a well-oiled, working machine.


You can install other toolkits for Xcode. There's even an AWS toolkit.


Is it possible to write code in Xcode, press compile, and have the debugger show me where an error is when compiling for Linux?

If yes, please show me the way because I've failed and I've given it an earnest try.


You can use LSP?


> story for web development

Under the hood, SwiftNIO and async Swift are a pretty powerful basis for writing performant servers. Aside from Vapor, there are other small/fast containers like Hummingbird.

Not mentioned (surprisingly) is Swift's support for Wasm/WASI, for deploying code directly in the browser.

Also, "some say" that macros could revolutionize both static and dynamic generation by moving a good portion of site generation to compile time. I'm not aware of any libraries realizing that promise yet.

Finally, Swift concurrent actors have supported distribution for some time, so you can have all-Swift distributed systems, where the client code works with both local and remote servers.
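
For a rough idea of what that looks like, here is an untested sketch using the LocalTestingDistributedActorSystem that ships with the Distributed module for in-process use; a real deployment would plug in a networked actor system, and the call site stays the same whether the Greeter is local or remote:

    import Distributed

    distributed actor Greeter {
        typealias ActorSystem = LocalTestingDistributedActorSystem

        init(actorSystem: ActorSystem) {
            self.actorSystem = actorSystem
        }

        distributed func greet(_ name: String) -> String {
            "Hello, \(name)!"
        }
    }

    func demo() async throws {
        let system = LocalTestingDistributedActorSystem()
        let greeter = Greeter(actorSystem: system)
        // Distributed calls are implicitly async + throwing from outside the actor,
        // which is what lets the same code talk to a remote instance.
        let reply = try await greeter.greet("HN")
        print(reply)
    }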


Vapor works great on Linux and macOS. Haven't tried Windows (pretty much only run Steam there these days)


For fast web servers, you could use .NET, especially if you care about Windows. It gives you a good ecosystem and a consistent experience across all platforms. Even FreeBSD support has improved considerably as of late. It is already built on top of what Swift, Java and others call "NIO"; in .NET it's just "SocketAsyncEngine", which users will never know about unless they look for it :)


Did you look at Kotlin?


Swift does not use a virtual machine or garbage collection; it competes more with C++ and Rust, and if Apple is serious about pushing it cross-platform, that's definitely a welcome move. In fact, I can't wait, even though I have never programmed in Swift. The main point is that it's memory safe, and it seems much easier to code in than Rust.


> garbage collection

Reference counting is garbage collection, and it performs significantly worse than tracing GC from a throughput perspective - possibly the most common metric for web-server-type workloads.

It really is not nitpicking; we should just say tracing GC when we mean it.


There is Kotlin/Native, which generates native code using the same LLVM that C++, Rust and Swift use. Kotlin doesn't have to use a virtual machine; the JVM is just one of its targets.


Not sure if it is 'production ready', or how its performance and binary size compare to C++/Rust/Swift. In the end, though, it's the ecosystem that matters.


> garbage collection

is reference counting not considered a form of garbage collection?


Nope; there's no having to pause execution and clean up. Miguel de Icaza (the creator of Mono) explicitly mentions this as one of Swift's key strengths over GC languages like C# during a talk about Swift at GodotCon 2023: https://www.youtube.com/watch?v=tzt36EGKEZo&t=7s&pp=ygURc3dp...


Miguel naturally wants to sell the Swift for Godot story.

Also, some of the Mono issues were related to the fact that they never had Microsoft's budget for implementing a bleeding-edge runtime.

From a computer science point of view, RC is and will stay a GC algorithm.

https://gchandbook.org/contents.html

https://sites.cs.ucsb.edu/~ckrintz/racelab/gc/papers/levanon...

https://sites.cs.ucsb.edu/~ckrintz/racelab/gc/papers/AzaPet-...


Maybe he should then read a book on garbage collectors; they all start with ref counting...

Also, is it "pause execution and clean up" together? Ref counting obviously has to clean up - that's the whole point - and it actually does so by blocking the mutator thread (the actual program written by the user). And we haven't even gotten to the fact that syncing counters across threads is possibly the slowest primitive operation a CPU can do, so if we can't know that an object will only ever be accessed from a single thread, ref counting has plenty of shortcomings. Oh, and dropping the count to zero on a big object graph will pause execution for a considerable amount of time (particularly noticeable when a C++ program that uses a bunch of shared_ptrs exits).


Perhaps? Most scenarios that explicitly pit .NET's GC against Swift's ARC show much better performance for the former, to the point where the fact that ARC has no GC pauses doesn't help when the whole thing is multiple times slower. In many ways it's like Go's """low-pause""" GC design discussions that completely ignore allocation throttling and write-barrier cost.

Swift lacking a properly performant GC is a disadvantage. Upcoming features address it by enabling more scenarios to sidestep ARC, but their impact on Swift as a whole, and on the user applications that adopt them, is yet to be seen.

It's important to always remember - there's no free lunch.

I'm sad that Miguel de Icaza seems to have a bone to pick with C# nowadays, but it's not surprising given the Xamarin story.


> Perhaps? Most scenarios that explicitly involve .NET's GC vs Swift's ARC display much better performance of the former

By which you mean "fewer CPU cycles on a desktop machine with plenty of memory"?

That's not when ARC is more performant; it's better on smaller devices that are under memory pressure and have swapped out some of your memory, in which case you have to swap it back in just to go scan for pointers. And if you're a low-priority daemon, you evict higher-priority pages in the process.


Perhaps? You assume GC takes unreasonably more space. It's purely a function of tradeoffs: running it more frequently, tuning heap-sizing algorithms, choosing to run collections as part of allocation calls on the same thread, sacrificing throughput in the process. GC can be more compact than you assume. Modern, good GC implementations are precise and don't have to mark dead GC roots as live, even within the scope of a single method. .NET and, I assume, Java GC implementations work this way - that's what "precise" means in "precise tracing GC".


It's not that it takes more space, it's that it has to read memory more often. Not all memory pages have the same cost to read.

Most memory swapping on most people's home computers is from web browsers for this reason; it's partly that everyone uses them, but it's also because they're running JavaScript. And they're pretty well tuned, too.


> it's that it has to read memory more often

Wait until you learn about "reads become writes with ARC" :)

ARC as implemented by Swift, on top of Objective-C's retain and release, is a design whose advantage is simplicity, but it is worse at other key aspects like throughput, contention, memory traffic and sometimes even memory efficiency. Originally, Swift was meant to use a GC, but this failed because Apple could not integrate it well enough with existing Objective-C code, leading to a very crash-prone solution.

Also, JavaScript has nothing to do with the lower-abstraction languages discussed in this chain of comments.


You're lecturing me about my job here. I don't need to learn nothin'.

> reads become writes with ARC

That's not a big problem (it is a problem but a smaller one) since you can choose a different tradeoff wrt whether you keep the reference counting info on the same page or not. There's other allocator metadata with the same issue though.

A more interesting one comes up with GC too; if you're freeing all the time, everyone compresses their swap these days, which means zeroing the freed allocations is suddenly worth it because it compresses so much better.

> Originally, Swift was meant to use GC, but this failed because Apple could not integrate it well enough with existing Objective-C code, leading to a very crash-prone solution.

It was Objective-C that had the GC (a nice compacting one too) and it failed mostly for that reason, but has not come back because of the performance issues I mentioned.

> Also, JavaScript has nothing to do with the lower in abstraction languages discussed in this chain of comments.

Oh, people definitely want to use it in the same places and will if you don't stop them. See how everyone's writing apps in Electron now.


> A more interesting one comes up with GC too; if you're freeing all the time, everyone compresses their swap these days, which means zeroing the freed allocations is suddenly worth it because it compresses so much better.

Moving GCs solve this much more elegantly, in my opinion, and Java is just so far ahead of anyone else in this category (like, literally the whole academic field is just Java GCs) that not mentioning it is a sin.


> literally the whole academic field is just Java GCs

Not necessarily a good thing. While reading Java-related papers I found myself constantly thinking "damn, they wrote a paper for something that is just 2.5 smaller pull requests in dotnet/runtime". I wouldn't hold up the modern state of academia as a shining example...


What are you even talking about? C# has a famously simplistic GC which is basically one big 1000-line file. C# makes very different tradeoffs compared to Java: it pushes complexity to the user, making the runtime simple. Java does the reverse, keeping the language very simple, but its runtime is just eons ahead of everything else. Like, call me when any other platform has a moving GC that stops the world for less than a millisecond independent of heap size, like ZGC. Or just a regular GC with throughput similar to G1's.


Historically, at its inception, .NET's GC was written in LISP and then transpiled to C++ with a custom converter. It is still a single-file implementation, but I'm afraid it's not 1000 but 53612 lines instead as we speak :)

Well, that's not one file per se, and there is more code and "supporting" VM infrastructure that makes the GC work in .NET as well as it does (it's a precise tracing generational moving GC), so the statement that it pushes complexity onto the user and underperforms could not be further from the truth. None of the JVM GC implementations maps to .NET 1:1, but there are many similarities with Shenandoah, Parallel, and some aspects of G1. In general, .NET is moving in the opposite direction to Java's GCs - it already has great throughput, so the goal is to minimize the amount of memory used to achieve it, while balancing the time spent in GC (DATAS currently targets up to 3% CPU time). You also have to remember that the average .NET application has much lower allocation traffic.

In addition to that, without arguing the pros and cons of runtime simplicity (because I believe there is merit to Go's philosophy), .NET's CoreCLR implementation is anything but simple. So the statement does not correspond to reality at all - it makes different tradeoffs, sure, but together with the CIL spec and C# design it makes historically better decisions than the JVM and Java, decisions which lend themselves more naturally to achieving high performance: no interpreter stage, only intermediate compilation tiers have to pay for OSR support, all method calls are non-virtual by default, true generics with struct monomorphization, and so on.

Another good example of the runtime doing truly heavy lifting on behalf of the user are byref pointers, aka 'ref's - they can point to _any_ memory: stack, GC heap, unmanaged or even device-mapped pages (all transparently wrapped into Span<T>!), and the runtime emits precise data for tracking them, updating pointers to object interiors as needed, without imposing any measurable performance loss. It takes quite a bit of compiler and GC infrastructure to make this work (exact register state in GC data for any safepoint with byrefs, brick tables for efficiently scanning referenced heap ranges, etc.).

List of references (not exhaustive):

High-level overview (it needs to be updated but is a good starting point): https://github.com/dotnet/runtime/blob/main/docs/design/core...

Implementation (the 53612 line file): https://github.com/dotnet/runtime/blob/main/src/coreclr/gc/g...

.NET GC internals lectures by Konrad Kokosa (they are excellent even if you don't use .NET): https://www.youtube.com/watch?v=8i1Nv7wGsjk

Articles on memory regions:

https://devblogs.microsoft.com/dotnet/put-a-dpad-on-that-gc/

https://maoni0.medium.com/write-barrier-optimizations-in-reg...

https://itnext.io/how-segments-and-regions-differ-in-decommi...

Articles on DATAS:

https://github.com/dotnet/core/blob/main/release-notes/9.0/p... (quick example of the kind of heap size reduction applications could see)

https://maoni0.medium.com/dynamically-adapting-to-applicatio...


I did write 'simple', but obviously meant simpleR. A performant runtime will still require considerable complexity. Also, C# doesn't underperform - I never said that - partially because the whole platform has access to lower-level optimizations that avoid allocating in the first place, as you mention (but Span et alia do make the language considerably more complex than Java, which was my point).

But on the GC side it quite objectively has worse throughput than Java's; one very basic data point would be the binary-trees benchmark in the Benchmarks Game. This may or may not be a performance bottleneck in a given application; that's beside the point. (As an additional data point, Swift is utterly bad on this benchmark, finishing in 17 sec while Java takes 2.59 and C# 4.61, due to its reference-counting GC, which has far worse throughput than tracing GCs.) But you are the one who already linked to this benchmark in this thread, so you do know it.


Do Go slices make it more complex? :)

Span<T> makes the language simpler from both the user's and the C#-to-IL-bytecode point of view; all the complexity is in the runtime (well, not exactly anymore - there's ref T lifetime analysis now). On that note, Java does not seem to have a generic slice type like ArraySegment<T>, which predates spans. I can see it has ByteBuffer, CharBuffer, IntBuffer, AbstractEnterpriseIntProviderFactoryBuffer (/s), etc. from NIO, as well as sub-Lists(?) and using Streams in the style of LINQ's Skip+Take.

Spans are very easy to use, and advertising them as an advanced type was a short-lived mistake at their inception. Since then, they have been adopted prominently throughout the ecosystem.

After all, it's quite literally just

  var text = "Hello, World!".AsSpan();
  var hello = text[..text.IndexOf(',')];
  var twoChars = hello[..2];
And, to emphasize, they transparently work with stack buffers, arrays, unmanaged memory and anything in-between. You can even reference a single field from an object:

    var user = (Name: "John", DoB: new DateTime(1989, 1, 1));
    ProcessStrings(new(ref user.Name));

    // Roslyn emits an inline array struct, from which a span is constructed
    // It's like T... varargs in Java but guaranteed zero-cost
    // In C# 13, this gets support of params so many existing callsites
    // that used to be params T[] start accepting spans instead,
    // completely eliding allocations or even allowing the compiler
    // to reference baked into binary constant arrays
    ProcessStrings(["hello", "world"]);

    void ProcessStrings(Span<string> values) { /* ... */ }
On binary-trees - indeed, the results are interesting, and Java demonstrates consistently lower CPU cost to achieve similar or higher throughput (look at the distribution of benchmark results). It is a stress test for allocation and collection throughput, yes. However, Java benchmarks also tend to consume consistently more memory, even in allocation-heavy scenarios: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

In any case, I have asked around for more data on detailed comparison of heap profiles between G1, Zgc and Parallel and will post them here if I get a response to provide more context. It's an interesting topic.


If your points of reference are Objective-C and Swift only, and you have not looked at how .NET's or Go's GCs work (the latter makes very different tradeoffs w.r.t. small memory footprint), it might be interesting to re-examine prior assumptions in light of modern designs (I can't say Go's is modern per se, but it is interesting nonetheless).

Also, .NET tends to zero memory heavily in general, as the spec dictates that fields, variables, array contents, etc. must be initialized to their default values before use (which is zero). The compiler can and will elide unneeded zeroing where it can prove it safe, but the point is that .NET heaps should compress quite well (and this seems to be the case on M-series devices).


There are popular apps written in C# on the platform, but they're Unity games, which use il2cpp and, I believe, still use the Boehm GC. I think this demonstrates a different point, since even a bad GC apparently doesn't stop them from shipping a mobile game… but it is a bad GC.

(Games typically don't care about power efficiency much, as long as the phone can keep up rendering speed anyway.)

> Also, .NET tends to heavily zero memory in general, as the spec dictates that fields, variables, arrays contents, etc. must be initialized to their default values before use (which is zero).

Same for most other languages, but there's a time difference between zeroing on free and zeroing on allocation. Of course, once you've freed everything on the page there are ways to zero the page without swapping it back in. (just tell the OS to zero it next time it reads it)


Yeah, Unity has terrible GC, even with incremental per-frame collection improvement. It's going to be interesting to look at the difference once they finish migration to CoreCLR.

If you'd like to look at a complex project, you can try Ryujinx: https://www.ryujinx.org. It even has native integration[0] with Apple Hypervisor to run certain games as-is on ARM64 Macs. There's also Metal back-end in the works.

Other than that, any new .NET application runs on macOS, provided it doesn't use platform-specific libraries (either something that uses Linux dependencies or kernel APIs, or Windows ones). My daily-driver device is an MBP.

A side note: on macOS, .NET does not use region-based heaps yet and still uses the older segment-based ones. This has implications in terms of worse memory-usage efficiency, but nothing world-ending.

[0]: https://github.com/Ryujinx/Ryujinx/tree/73f985d27ca0c85f053e...


Man, the term must have changed since I was in school; I thought garbage collection was a much more general concept than one specific tactic for achieving automatic memory reclamation. Pity, it was a useful term.

It's worth noting many others also consider automatic reference counting to be a form of GC, albeit one with different strengths and weaknesses than the stack- and heap-scanning varieties.


Memory safe and, with Swift 6, data race safe.


there is Kotlin Native - "Kotlin/Native is a technology for compiling Kotlin code to native binaries which can run without a virtual machine."

https://kotlinlang.org/docs/native-overview.html


> Swift does not use virtual machine and garbage collection, it competes more to c++ and rust

Doesn't it use ARC by default?


Which is reference counting, not garbage collection. Ref counting frees an object when its count reaches 0; garbage collection scans all object pointers looking for unreachable objects (including cycles).


That's tracing garbage collection. Reference counting is another type of garbage collection. https://en.wikipedia.org/wiki/Garbage_collection_(computer_s...


Reference counting is not tracing garbage collection. To also quote a Wikipedia link: "The main advantage of the reference counting over tracing garbage collection is that objects are reclaimed as soon as they can no longer be referenced, and in an incremental fashion, without long pauses for collection cycles and with clearly defined lifetime of every object."

+ https://en.wikipedia.org/wiki/Reference_counting


> Reference counting is not tracing garbage collection.

???

They didn't say it was.


Of course reference counting is not tracing garbage collection. I never said it was. The comment I replied to claimed reference counting was not garbage collection at all and seemed to think tracing garbage collection was the only kind of garbage collection. Reference counting and tracing garbage collection are two different types of garbage collection.


Reference counting is a kind of garbage collection.


It does. I thought ARC was more performant than GC and had no stop-the-world issue, and thus was not a GC.


Usually, it's the other way around: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

binary-trees is almost completely dominated by time spent in allocator code, and stresses its throughput. This benchmark showcases how big the gap is between manual per-thread arenas, then tracing generational multi-heap GCs, then ARC and more specialized designs like Go's GC. An honorable mention goes to BEAM, which also shows excellent throughput by having per-process independent GCs, in this case resembling the behavior of .NET's and OpenJDK's GC implementations.


A tree is indeed a bad fit for RC; so is anything else where you have multiple references to something but know there is a single real owner.

I'd suggest keeping strong references to all the tree nodes in an array, then having everything within the tree be unowned. Basically fake arena allocation.

Actually, the way it's written:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

is a common way you see toy data-structure code written, but it's inefficient (because pointer chasing is slow) and there are better patterns. If you use the arena method above, you could use indexes into the arena. If not, intrusive data structures (where the references live inside Node instead of inside Tree) are better.
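
A rough sketch of the index-into-arena variant (illustrative only, names made up): the array is the sole owner of the nodes, and the tree's "pointers" are plain Ints, so traversal does no retain/release at all.

    struct Tree {
        struct Node {
            var value: Int
            var left: Int?     // index into `nodes`, not a reference
            var right: Int?
        }

        private(set) var nodes: [Node] = []   // the arena owns every node

        mutating func add(_ value: Int, left: Int? = nil, right: Int? = nil) -> Int {
            nodes.append(Node(value: value, left: left, right: right))
            return nodes.count - 1             // the new node's "pointer"
        }

        func check(_ index: Int) -> Int {
            let node = nodes[index]
            var sum = node.value
            if let l = node.left { sum += check(l) }
            if let r = node.right { sum += check(r) }
            return sum
        }
    }

    var tree = Tree()
    let leftChild = tree.add(1)
    let rightChild = tree.add(2)
    let root = tree.add(0, left: leftChild, right: rightChild)
    print(tree.check(root))   // 3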


Pointer chasing is irrelevant here. It takes <= 15% of the execution time, and CPUs have gotten good at it. If it takes more, that says more about the quality of an allocator with poor memory locality. As noted in my previous comment, the benchmark is dominated by the time spent in allocator/GC code.

Please read https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

The requirement to implement the same algorithm with the same data structure is what makes this benchmark interesting and informative. Don't tell me "allocate parent object A, allocate child objects C, D and E and assign them to A's fields, then allocate array F and G, and assign them to D's fields" isn't the bread and butter of all kinds of application logic, something that this benchmark stresses.


Some CPUs are good at it, but most aren't. (Apple's are.)

But that's not the actual issue; the issue is that pointers are big (8 bytes) and indexes are smaller, so now you can fit more in the cache. It would also help GC because it doesn't have to trace them.

Also, I don't recommend intrusive structures merely because they'd be better for language games. I think they're better in general ;)


> But that's not the actual issue; the issue is that pointers are big (8 bytes) and indexes are smaller, so now you can fit more in the cache. It would also help GC because it doesn't have to trace them.

Please read 'binary-trees' description and submission rules (#2). You are missing the point(er).


ARC is a variation of GC. Besides, a tracing GC doesn't have to stop the world at all.


It has nowhere near the performance characteristics of those languages. It could, but it doesn't. Look up a variety of language benchmarks; it's typically ranked around Python/JavaScript. You can get as fast as C, but the code is very atypical.


There's no way it's close to Python. Where are the benchmarks?

https://github.com/sh3244/swift-vs-python

Shows a huge difference, as expected for a typed memory-safe compiled language using LLVM versus an interpreted language with a global interpreter lock.



The thread you just posted has a bunch of posts indicating this was not actually the same program in Python and Swift; further, the Swift version was written poorly. Plus, the graph in the final post shows whatever Swift version someone ran the tests on as much faster than Python.


It is slower, probably ~3x, but let's not exaggerate that it ranks around Python.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Edit: Kotlin is perfectly fine for _just_ web servers; Vert.x is great. On systems programming, read below:

All JVM languages are by definition not viable for this domain. The object-oriented and heavily abstracted nature of the underlying runtime implementations prevents their effective use in systems programming, due to the lack of fast FFI and of structs (particularly with custom layout), and the ecosystem's historical aversion to low-level features.

Kotlin/Native does not count because it presently has 0.1-0.001x the performance of OpenJDK - it is that bad - and I assume it is limited to the common subset of features that must also be expressible on the JVM.

.NET, especially with compilation to native statically linked binaries (NativeAOT), is an option, and I believe that due to ecosystem maturity, the very heavy focus on performance in all recent .NET versions, and continued improvement of low-level features (portable SIMD, byref pointers with simple lifetime analysis, static linking with C/C++/Rust/etc.), it is a strong contender. Unlike Java, C# has a great systems-programming story; after all, it was influenced as much by C++ as by Java - sadly, many only ever think about the latter.

However, I'm looking forward to Swift 6. Once it is out, I'd love to see it offer more ways to ensure static dispatch and generic monomorphization (in .NET, generics with struct arguments are always monomorphized, like in Rust, so you have tools for zero-cost abstractions) and happy paths that bypass the prohibitive cost of ARC with new annotations. By using LLVM, Swift has a theoretically great performance ceiling, even if it does not deliver on it just yet, losing to C# by a good margin on more complicated code due to ARC and dynamic dispatch. But because Apple seems invested in using it for tasks that will require addressing these shortcomings, it is pretty exciting to see where they will take it.
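
To illustrate the kind of annotations meant here, a hedged sketch using the Swift 5.9+ ownership features (non-copyable types plus borrowing/consuming parameters; the names are made up):

    struct PacketBuffer: ~Copyable {
        var bytes: [UInt8]

        // Consuming `take` transfers ownership of the buffer to the caller.
        consuming func take() -> [UInt8] { bytes }
    }

    // `borrowing` reads the value without taking ownership or copying the struct,
    // giving the compiler static ownership information instead of ARC traffic.
    func checksum(_ buffer: borrowing PacketBuffer) -> UInt8 {
        buffer.bytes.reduce(0) { $0 &+ $1 }
    }

    func demo() {
        let packet = PacketBuffer(bytes: [0x01, 0x02, 0x03])
        print(checksum(packet))     // borrow: packet is still usable afterwards
        let raw = packet.take()     // consume: packet may not be used after this line
        print(raw.count)
    }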


Isn't Kotlin based on the JVM while Swift is natively compiled? That's a pretty significant difference, and I'm not aware of any "to native" compiler for Kotlin like the NativeAOT approach that exists for .NET...



There are in fact two "to native" compilers for Kotlin: the Kotlin-only one is called Kotlin/Native, but you can also use GraalVM native-image to compile any JVM language to native.


How hard is it to build an HTTP server?

(Yeah, this is a dumb question but I'm asking anyway)


Do you mean from first-principles, or letting someone else do the work and use a framework?

Apple did the hard work, https://github.com/apple/swift-nio.

If you just want a big framework to launch an API, that's been around for years: https://vapor.codes
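
For reference, the 'hello world' shape people keep mentioning is roughly this (an untested sketch along the lines of the Vapor 4 starter template):

    import Vapor

    let env = try Environment.detect()
    let app = Application(env)
    defer { app.shutdown() }

    // Route closures return anything ResponseEncodable; a plain String works.
    app.get("hello") { req in
        "Hello, world!"
    }

    try app.run()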


Sneakily hard, actually. There are different versions of HTTP (of course), so pick your target. But once you hit HTTP/2.0, it's not a simple request/reply model (if HTTP/1.1 can even be described as such): the intermixing of various client headers and what they imply for server behavior, handling of streams and when to open vs. when to close them, HTTP/2 multiplexing, etc. Don't forget HTTP/3, which uses the QUIC protocol (UDP-based) instead of TCP.

Interestingly though, a trivial HTTP server is actually very easy to implement. A very crude HTTP/1.0 server (or maybe even a limited-scope HTTP/1.1 server) can make for a fun afternoon project: minimal (or no) concurrency support, all TCP connections closed after the request/response cycle, GET only (or maybe POST only), etc.

So it's a mixed bag of what you want and how you define an HTTP server.
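
For flavor, the afternoon-project version might look roughly like this in Swift - an untested, macOS-flavored sketch over raw POSIX sockets (swap Darwin for Glibc on Linux), single-threaded, ignoring the request and closing every connection after one response:

    import Darwin

    let server = socket(AF_INET, SOCK_STREAM, 0)
    var reuse: Int32 = 1
    _ = setsockopt(server, SOL_SOCKET, SO_REUSEADDR, &reuse, socklen_t(MemoryLayout<Int32>.size))

    var addr = sockaddr_in()
    addr.sin_family = sa_family_t(AF_INET)
    addr.sin_port = in_port_t(8080).bigEndian
    addr.sin_addr.s_addr = 0                  // INADDR_ANY

    _ = withUnsafePointer(to: &addr) {
        $0.withMemoryRebound(to: sockaddr.self, capacity: 1) {
            bind(server, $0, socklen_t(MemoryLayout<sockaddr_in>.size))
        }
    }
    _ = listen(server, 16)

    while true {
        let client = accept(server, nil, nil)
        guard client >= 0 else { continue }

        var request = [UInt8](repeating: 0, count: 4096)
        _ = read(client, &request, request.count)   // read (and ignore) the request

        let body = "Hello, World!\n"
        let head = "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\nContent-Length: \(body.utf8.count)\r\n\r\n"
        let response = head + body
        response.withCString { _ = write(client, $0, response.utf8.count) }
        _ = close(client)
    }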


I can't think of a good reason to want to implement the complex parts. Write an HTTP service if you must, but make it HTTP/1.0 (or 1.1 for keepalive) and stick it behind nginx to do the newer versions and SSL termination.

(I also think all HTTP services should if possible be written in a shared-nothing CGI type language, and not embedded in your random stateful program. This way you can't accidentally leak info across user sessions.)


Both of these are great points. I do really appreciate an nginx (or other load balancer) front end. Or even cloudflare or whatever AWS/Azure offers. A simple horizontally scalable HTTP/1.1 backend with a reverse-proxy that can uplift your app is a great strategy.

Also, your comment about "shared-nothing" is interesting too. It surely doesn't hurt to think about it in this way, but likewise, might be out of scope for a simple web server concept (for example, if you're not even really supporting sessions at all).


That all depends on your requirements. A few hundred lines of code will get you pretty far, but there are about 100 optional features in common use.


I've migrated to Swift for some nontrivial projects that were formerly C++. Quite happy so far, and I didn't find Rust nearly as pleasant when I tried the same thing there. I don't want explicit memory management very often, so ARC works great for me. Haven't had any issues with portability either, although I don't test often on Windows so I'm not confident there, but Linux and macOS have been great.


I'm glad you found something you like. I just want to make it clear that the things about Rust that make it "unfriendly" are also the things that make it able to do things other languages can't do, like compile-time memory safety. Depending on what you are making, that might make little difference. I just wanted to make sure you appreciated what Rust can do that other languages can't.


This is an article about another language that can do that.



Swift does have borrow checking: https://www.swift.org/blog/swift-5-exclusivity/

Basically the difference is that Swift's is more implicit, happens more at runtime, and it will make some programs work via copy-on-write that Rust would reject.

So that's obviously more limiting. It's more flexible when you can allocate memory freely, but it doesn't work if you can't.
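
A hedged sketch of both halves of that (the first pattern is adapted from the kind of example in the linked exclusivity post):

    // Exclusivity is enforced at run time for cases the compiler can't prove:
    func modifyTwice(_ value: inout Int, by modifier: (inout Int) -> Void) {
        modifier(&value)
        modifier(&value)
    }

    var count = 1
    modifyTwice(&count) { $0 += 1 }          // fine: the closure doesn't touch `count` itself
    // modifyTwice(&count) { $0 += count }   // traps: reads `count` while it is exclusively
    //                                       // held for writing; Rust rejects this at compile time

    // Copy-on-write lets value types be shared freely until one side mutates:
    let a = [1, 2, 3]
    var b = a          // no copy yet; a and b share storage
    b.append(4)        // b gets its own storage here; a is unchanged
    print(count, a, b)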


> happens more at runtime

Bingo, that's the difference. That's why I said "compile-time memory safety". This is what Rust gives you for your trouble, zero (runtime) cost for memory safety.


Curious! In what ways do you see Swift as friendlier than Rust? I perceived it as functionally equivalent, although Swift had a lot more "magic" involved and it was less clear how things might work memory-wise.


To me, Swift has better ergonomics for most people.

Ref counting by default makes most programs easier to deal with.

Guard let (though something similar was recently introduced in Rust) is a much friendlier way to unwrap optionals.

Being able to provide parameter defaults and aliases.

Easy passing in of callbacks. Easier async.

Optional chaining with the question-mark operator.
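
A rough sketch pulling a few of those together (all the names are made up):

    struct Page { var title: String? }

    // Default parameter value plus an external argument label ("uppercase")
    // that differs from the internal name.
    func describe(_ page: Page?, uppercase shouldUppercase: Bool = false) -> String {
        // guard let: unwrap the optional (chained through ?.) or bail out early.
        guard let title = page?.title else { return "untitled" }
        return shouldUppercase ? title.uppercased() : title
    }

    print(describe(Page(title: "Swift 6")))      // "Swift 6", default argument used
    print(describe(nil, uppercase: true))        // "untitled"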

I really like rust, but it’s much faster to get to a working program in Swift.

And with the new CXX interop, I now reach for Swift by default since I have to deal with a lot of C++ libraries.


> Easier async.

I was on board until this one. Async is a rough spot for Rust, but I find the async strategy Swift went with absolutely baffling and difficult to reason about.


I'm curious, what puts you off of them? Actors are a pretty standard way to do it, and I feel like most of the successful Rust implementations are actor-based as well.


More magic (thus less required explicitness) and less involvement with memory management are typically considered as friendly traits in programming languages.


> More magic (thus less required explicitness) and less involvement with memory management are typically considered as friendly traits in programming languages.

Really depends on the context. I really, really, really hated this instinct in the ruby on rails community when I was still doing that. Magic is nice until it doesn't work the way you expect, which is when it becomes an active liability.

I really don't spend much time thinking about memory management in Rust, but I can certainly understand why one might be happy to not have to design around ownership and lifetimes. I really like the explicit nature of it, though, makes it super easy to read and reason about code you've never seen before.



