Enums are really the best feature of Swift. I enum all the things everywhere. This is kind of the finishing touch to it.
Binary stability would be nice. I do care about the download sizes of my apps. What's really missing from the language is a way to define custom attributes, in the vein of @discardableResult or @objc, that attach behavior to a property. It's such a kludge to set up things like ViewModels when I really would like to have something like:
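(A hypothetical sketch; @observable is a made-up attribute, not something Swift actually supports:)

// Made-up syntax: a user-defined attribute that would generate the
// change-notification boilerplate for a ViewModel property.
final class LoginViewModel {
    @observable var username: String = ""
    @observable var isValid: Bool = false
}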
It might as well be KVO under the hood, no problem. It's just that annotations can be really handy for expressing a ton of stuff that would otherwise be lines of code. I use them in other languages all of the time.
Yeah, I'll agree here. Being able to slap an annotation on a method in C# is one of the major pluses. [Authorize] in ASP.NET MVC (Core as well, I believe?), the various ones in Entity Framework, etc. Very useful.
> Swift 5 (Early 2019):
> binary compatibility with future Swift releases
I bet this doesn't really happen, though announcing it probably brings the day it actually happens closer.
(I don't just mean that they might miss the date, but that:
(1) they miss the date by a lot; and/or
(2) they drop binary compatibility from Swift 5; and/or
(3) they claim they've achieved binary compatibility, but it doesn't turn out to hold for long, i.e. less than 5 years.)
>>Hash values vary from run to run (so don’t depend on hash values or order)
I must be reading this wrong. One of the requirements for a good hash function is determinism - specifically that any given value in the input space maps to exactly one value in the output space.
So you can see the hash function itself is deterministic given the seed it is initialized with. And the seed is generated at process start, during static initialization in the C++ runtime, so it is effectively a constant for the lifetime of the process.
You can see in the code that there is a way to make hashing fully deterministic by making the seed generation not use random numbers, but it is ill-advised for a variety of security reasons.
The hashing algorithm can be deterministic; however, hashing relies on a seed. In most cases you want that seed to be random to prevent certain side-channel attacks. For Swift, hashing will be deterministic for the lifetime of the program.
That makes sense. I was thinking in terms of global lookup keys, which would require hashes to persist across runs. When thinking in terms of object equality, the hashes need only live as long as the objects themselves. Namely, for the lifetime of the program.
This is a more accurate description. The hashing algorithm is 100% deterministic, but the inputs include a random seed from the start of the program to prevent various side channel attacks and to prevent folks from erroneously relying on hashmap ordering.
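To make that concrete, a minimal sketch using the Swift 4.2 Hasher API (the API is real; only the printed value is not reproducible across launches):

// Hash values are stable for the lifetime of one process...
let a = "hello".hashValue
let b = "hello".hashValue
assert(a == b) // always holds within a single run

// ...but because of the per-process random seed, this printed value
// will almost certainly differ the next time the program is launched.
print(a)

// The same seed feeds the new Hasher type directly:
var hasher = Hasher()
hasher.combine("hello")
print(hasher.finalize())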
I think the intended reading of "so don't depend on hash values or order" is "do not have your code depend on hash values, or on the iteration order of hashed collections".
So, what you shouldn’t do is:
- store a hash value on disk, assuming it will still be valid in a later run;
- assume that a hash map with the same contents as a hash map from a previous run has the same iteration order (not even if the calls used to construct them are 100% identical, and not even if both were constructed from the same literal).
Note that you can disable this feature, however in most cases it's probably best not to -- the new hasher functionality is actually really cool and a good improvement on the old behavior.* If you want to disable it, I believe it's the SWIFT_DETERMINISTIC_HASHING environment variable rather than a compiler flag.
[* said as someone who's implemented probably a ton of poor hash functions in their time]
Still not as fast, and Objective-C probably got a speed boost as well. Objective-C is a very simple language, and its types are quite explicit. In Swift you have inferred types, like:
let result = jsonList.map { Class(from: $0) }.filter { $0.isSelected }.map { T(from: $0) }
Or worse:
let myDict = ["key": 1, "key2": 0.2, "key3": "something"]
It has to figure out that the values are not all Ints, not even all numbers, so it's an Any dictionary. The complexity of figuring out the real type can ramp up quickly.
Type inference, at least for variables, doesn't really take that much time. In practice almost every language does it; they just don't expose it to the developer. For example, you can do this in C:
foo->bar()->baz()
And the compiler has to work out the type of the bar() result. That's only a small step from:
let x = foo->bar()
Also, that dictionary is most likely parsed and assigned a type as an expression whether you annotate the variable or not.
That is way too oversimplified a case of type inference. A whole lot of languages don't have this:
let x = (f ? makeDerived1() : makeDerived2()); // no error, x is inferred to be Base
var x = 1;
x *= 2;
x = sin(x) / x; // no error, x is inferred to be Float
The problem is not just that extra computation is needed; it's also language design. For example, you probably do not expect a dictionary of Floats plus a bare 0 to become a dictionary of Any, but how are you going to implement that in the compiler?
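To make that concrete (this is real Swift behavior): in the literal below, the bare 0 would default to Int on its own, but the compiler lets it become a Double so the dictionary stays homogeneous rather than collapsing to [String: Any]:

// Inferred as [String: Double]: the integer literal 0 is treated
// as 0.0 instead of forcing the whole literal to [String: Any].
let weights = ["a": 0.5, "b": 1.25, "c": 0]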
The first example is simple. The only extra work is finding the common parent class.
And yes, the next example does need backtracking. I agree that it needs extra work in some cases. Most of the time, though, it's a very straightforward process.
"Does need extra work in some cases" is an understatement. With type 'inference' that's unidirectional, like in C and C++, you only need to look at each expression once. Thus, in most cases, the whole job is O(n) in the number of expressions.
Admittedly, there are exceptions. For example, it's possible in C++ to create humongous types:
auto p1 = std::make_pair(0, 0); // pair<int, int>
auto p2 = std::make_pair(p1, p1); // pair<pair<int, int>, pair<int, int>>
auto p3 = std::make_pair(p2, p2); // pair<pair<pair<int, int>, pair<int, int>>, pair<pair<int, int>, pair<int, int>>>
auto p4 = std::make_pair(p3, p3); // ...
Also, since templates are Turing complete, it's possible to create situations where type checking a single expression takes arbitrarily long.
But neither of those are situations you're likely to run into by accident. In most real C++ programs, all the types in the program are reasonably simple, and the template system isn't used to do anything especially clever, so the O(n) bound should hold.
On the other hand, full type inference, at least in a language like Swift that allows arbitrary overloads, is inherently a process of exhaustive search over an exponential number of possibilities. Now, as described in this post[1], it ought to be possible in most cases to reduce that exhaustive search to something much simpler – and I actually agree that the Swift compiler could be much, much better at doing so (although it has improved over time). But the post also mentions at least one case where Swift type checking can simulate 3SAT, an NP-complete problem; and I think there are other cases the author didn't think of. So it's really far from straightforward.
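For a taste of where that search blows up (illustrative; whether this particular line compiles quickly depends on the compiler version): every bare literal below could be Int, Double, Float, and so on, and every operator has overloads for each candidate, so the solver's space grows exponentially with the length of the chain. Chains like this are the classic trigger for Swift's "expression was too complex to be solved in reasonable time" error:

// Each literal and each overloaded +/-/* multiplies the number of
// candidate typings the constraint solver has to consider. Depending
// on the compiler version, this may type-check slowly or be rejected.
let x = 1 - 2.0 + 3 * 4 - 5.0 + 6 * 7 + 8 - 9.0 + 10 * 11 - 12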
You don't understand the problem. It's the space of possible types versus the types declared. Swift will always try to go with the most exact type it can find. C doesn't really have types in the same sense; C is just a bunch of numbers. Swift always has types, even if you don't write them out. In the case of a large dictionary with multiple types for keys and values, it has a ton of possibilities to test before the real type is determined.
In the days of yore, a very large (like 20 items) untyped dictionary declaration could take 2 seconds to compile. If you put:
let dict: [String: Any] = [ /* bunch of confusing key-values */ ]
the compile time would go down to 100ms or less, because the compiler would only have to check that all of the keys were Strings and all of the values conformed to Any.
I'll just stop you there to consider why foo->bar()->baz() works and how the compiler knows "baz" can be referenced. C may not have a type hierarchy, but it sure has types.
I wrote type inference like that, and yeah, in pathological cases it takes some time. But if you spend dev-visible time on a mixed-type dictionary, that's just a bad implementation.
In addition to Vapor, which other commenters have mentioned, Apple itself released SwiftNIO, a port of Netty to Swift that they're actively developing: https://github.com/apple/swift-nio
It’s somewhat lower-level than the web frameworks people coming from a Ruby/Node background may be used to, but it’s pretty powerful.
I’ve used it for a few sites at this point. Haven’t used v3, but it’s been a great resource. It’s a cool experience coding a website with a compiled language, especially Swift.
Vapor looks like the best so far, but it's still early days. In theory it has the potential to be extremely fast and memory-efficient, but, for example, they only just got a couple of TechEmpower benchmarks up a few days ago, and they ranked very low (#162 on the JSON test).[1]
It goes without saying that simplistic benchmarks like these are flawed, but at the same time, you have to start somewhere.
JSON performance is super important for a server, far more than for the usual macOS or iOS clients. Their JSON parsing might be based on the basic JSON parser that ships with Swift's Foundation, which isn't fast at all; it's designed to be easy to use and secure instead.
It's not super up-to-date, but you can see there's definitely some competition even at the Swift level in terms of JSON performance:
https://github.com/bwhiteley/JSONShootout
Swift JSON encoding/decoding should approach C/C++ levels of performance when optimized for speed.
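For reference, the convenience path the parent comments mean is Foundation's Codable machinery; a minimal sketch (User is a hypothetical model type):

import Foundation

struct User: Codable {
    let id: Int
    let name: String
}

let json = "{\"id\": 1, \"name\": \"Ada\"}".data(using: .utf8)!
do {
    // Ergonomic and type-safe, but historically built on top of
    // JSONSerialization, so throughput lags hand-tuned C/C++ parsers.
    let user = try JSONDecoder().decode(User.self, from: json)
    print(user.name)
} catch {
    print("decode failed: \(error)")
}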
There's the Perfect [0] library, which I recently used to develop a real-time multiplayer TicTacToe proof of concept for iOS. I used Perfect's WebSocket functionality on the server. Worked out well enough.
It's on the list of things that would be nice someday, but it will take a lot of time to get right, and it's not a high enough priority right now. There are a handful of other significant features that will come first, like ABI stability, Cyclone/Rust-style ownership checking, and first-class concurrency/atomics (async/await, etc.).
That said, Swift's C interop will work if you write an `extern "C"` interface to your C++. It's not ideal, but people have done projects like llvm-swift[0], LLVM obviously being a C++ project.
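A minimal sketch of that pattern from the Swift side, with hypothetical names (Counter is an imagined C++ class; the extern "C" shim is what Swift actually imports via a module map or bridging header):

// Assume a C header declaring a shim around a C++ class, e.g.:
//
//   typedef struct OpaqueCounter *CounterRef;
//   CounterRef counter_create(void);
//   void counter_increment(CounterRef counter);
//   long counter_value(CounterRef counter);
//   void counter_destroy(CounterRef counter);
//
// Swift sees those as ordinary C functions; the C++ object itself
// stays behind an opaque pointer.
let counter = counter_create()
counter_increment(counter)
print(counter_value(counter))
counter_destroy(counter)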
AFAIK it’s still in the cards but it’ll come after ABI stability. I have at least one project that’ll make use of this — can’t wait to drop the ObjC layer in it entirely.
Nothing inherently wrong with it, but it's not available on all platforms that support Swift. The new Random APIs use arc4random() under the hood on Darwin IIRC.
There's also no direct equivalent to arc4random() in the new API - you always have to pass a range, which discourages people from using % to reduce the range and introducing modulo bias.
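A quick comparison (Int.random(in:) and Bool.random() are the real Swift 4.2 APIs; the arc4random() line is the old Darwin-only pattern):

import Foundation // arc4random() comes from Darwin

// Old pattern: % introduces modulo bias whenever the divisor
// doesn't evenly divide 2^32.
let biasedRoll = Int(arc4random() % 6) + 1

// Swift 4.2: range-based, bias-free, and works on Linux too.
let roll = Int.random(in: 1...6)
let flip = Bool.random()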
"but it's not available on all platforms that support Swift"
I think that’s a weak argument. arc4random’s source code isn’t platform-specific, complex, or large, and it is available under a permissive license, so they could easily have put it in the runtime.
I would think they added this because of the modulo argument you give, and to give it a better name (there’s nothing wrong with ‘arc4random’ for _a_ random number generator, but _the_ random number generator on a platform should have a simpler name).