I saw Scala taken over and destroyed by the FP fanatics who want to turn every language into Haskell. I hope they don't do the same to Rust. Just use Haskell and leave the rest of us alone!
It sounds silly but it's true: as you increase the complexity of the language, the hardcore users making the libraries adopt those features, which means users have to adopt those features as well. Pretty soon everyone is talking about monad transformers and HKTs and it takes 4 hours to write something that would take 15 minutes in Python.
Then the wave of FP consultancies and trainings shows up so they can bill you $10,000s on workshops and conferences to teach your devs monads.
> I saw Scala taken over and destroyed by the FP fanatics
I suspect that the above reflects a very personal experience rather than something general. I have been using Scala for 10 years and never used monad transformers. I find Scala code usually easy to write and to read. At this point I wouldn't trade it for any other language.
From where I stand, Scala was neither "taken over", nor "destroyed by FP fanatics". It is not Haskell, and the upcoming Scala 3 is not going in the direction of being more like Haskell. Scala has always been about supporting both object-orientation and functional programming. It's still the case with Scala 3, and it's getting better and better.
I have worked in multiple Scala shops and contributed at the highest levels to the Scala ecosystem and my experiences confirm that this rotten attitude is very real and increasingly the norm, as everyone but the fanatical FP-ers have long-since moved on to other more professional/productive circles.
With the exception of shops using Scala exclusively for Spark, someone entering the ecosystem can expect to be constantly talked down to if they dare to use something deemed by the FP hivemind to be evil (so, anything other than pure, immutable, FP-style with all side effects controlled, no exceptions or nulls, no inheritance (only typeclasses), etc.). They'll be derided and scoffed at constantly for being "just a Java++ programmer".
Meanwhile, for all their smugness, the FP community has achieved nothing that has reached the world outside of Scala. The projects successful in bringing developers to Scala have been decidedly in the disparaged lightly-FP/"Java++" style: Spark, Kafka, LinkerD (1.x -- they rewrote 2.x in Go/Rust), Flink, Akka, Play Framework, Twitter's stack, Prisma, etc. Shockingly, the predominant view among these delusional pure-FP obsessives, which usually goes unchallenged, is that the creators of Spark don't know anything about Scala or are bad engineers!
Scala 3 is a joke and will do the exact opposite of its stated goals from years back. It was supposed to streamline and simplify the language, remove gotchas, and add a few high-impact simple features like union types and trait parameters. Now it has grown into a monstrosity of complicated features that your average dev will never use. Rather than streamlining and simplifying Scala 2's issue of having dozens of ways to do the same thing, it introduces multiple new dimensions along which people can do things in multiple ways (legacy implicit system vs. new "given" syntax; braces syntax vs. indentation-based).
Control structures helpful for imperative programmers are being removed (you cannot return early from a for loop without a huge hassle); `do-while` is removed, etc. These breakages were not pushed back against because the only people still around are FP'ers who don't want people using loops in the first place.
New type-system features are being added without even knowing if there's a possible use-case. The whole thing is a mess.
I'm not sure why Prisma is called out here. But posts like this bubble up in our Slack, so I thought I should respond...
Prisma's query engine is a complex piece of code and receives very few outside contributions. Moreover, Prisma only has bindings for JS/TypeScript and Go at the moment, so there is no way to consume Prisma from Scala. As a result, very few Scala developers know about Prisma.
We enjoyed Scala as a language, and the massive JVM ecosystem is a huge benefit. That said, we were forced to rewrite the query engine in Rust as we needed a more modular architecture enabling us to embed parts in JS and Go libraries. We looked at the Scala Native and Graal projects (spent 6 months building a prototype), but neither delivered a sufficiently low memory footprint. The Prisma 2 rewrite in Rust is a much more stable product, and we love the Rust language.
All the best to both the Scala and Rust ecosystems. Hugs.
I am not sure how much of that is worth responding to. You obviously have a strong opinion and have been burned and appear to have pent up anger. I don't share your opinion.
I use Scala, I love it, I am looking forward very much to Scala 3, and I find help in the community when I need it. And I am not a pure FP programmer, although I like lots of the ideas of FP.
> everyone but the fanatical FP-ers have long-since moved on to other more professional/productive circles
I for one haven't and I am not a "fanatical FP-er". I bet I am not the only one.
> Scala 3 is a joke […] it has grown into a monstrosity of complicated features that your average dev will never use
I don't think that's true at all.
> New type-system features are being added without even knowing if there's a possible use-case.
I don't know what you are referring to. From the doc, I note:
- intersection types, which are essentially a better way (commutative) of doing `A with B`
- union types, which several languages now have, including TypeScript, and which are definitely a very useful feature (in particular for Java and JavaScript interop, but there are other use cases)
- dependent and polymorphic function types, which are just an extension of what was possible before with methods
- match types and type lambdas, which I cannot comment on
> The whole thing is a mess
I obviously don't see things with the same eyes you do. I am really excited about Scala 3 and I do think it improves the language significantly, as it should.
Scala 3 also aims to solve a very real issue with previous releases of Scala: binary compatibility. There is now a solid compatibility story within Scala 3.x, and also between Scala 2 and Scala 3.
> the only people still around are FP'ers who don't want people using loops in the first place
I can't remember the last time I used `do-while`. But you can rewrite this trivially to a `while`, which is not going away (in fact, I think that Scalafix will do that for you automatically). In any case, community questions about things like mutability, loops, or more imperative features typically receive answers to the effect that it's all right to use such constructs, especially in the small (like at the level of a single function). These features are part of Scala and generally accepted. They are used by the Scala standard library, and regularly acknowledged by Martin Odersky. A quick code search finds such uses in libraries such as Circe and Cats.
Regarding non-local `return`, you are also overlooking the fact that this is a frequent cause of confusion and errors. Removing this feature has little or nothing to do with FP fanaticism.
Personally, I can only encourage programmers to look into Scala. It is a fantastic language with great features. It also has an incredible JavaScript story with Scala.js, which is rock-solid. The transition to Scala 3 will be a good time for newcomers to look at the language and its community with a fresh look.
I’ve been interested in Scala recently, although I’ve heard it’s been unfortunately pigeonholed in industrial use. Would you say it’s worth taking up now, or should I wait for Scala 3?
Haha, I'm the wrong person to ask. I was fortunate enough to be in a hands-on senior role, and I promoted a very light Java++ style that could be learned in a 2-hour seminar: immutable collections, case classes, pure functions, impure logging & exceptions, sparingly used interface polymorphism, recursion, filter/map/flatMap/fold. Scala: The Good Parts. To this day I have not learned the first thing about implicits or type variance, and there are probably many, many more Scala features I haven't even heard of. The biggest challenge was helping the rest of the team avoid writing inscrutable sbt plugins. The plus side: it makes little difference whether it's Scala 2 or Scala 3. The downside: some teams may scoff at such a mundane approach.
It doesn't matter much in my opinion if you start with Scala 2 now or wait for Scala 3. Scala 3 is 95-99% compatible with Scala 2. There is very little that you would learn in Scala 2, especially as a beginner, that will be truly obsolete when 3 is out (an exception might be symbols, which had very little use anyway). In addition, most existing codebases are in Scala 2 at this time.
(I, and my company, use Scala as a general-purpose language, targeting both the JVM and JavaScript, by the way.)
> Isn't the FP community in the Scala world at war?
I have heard that there have been one or two difficult individuals in certain segments of the FP community, especially 5-10 years ago. I understand that this caused the Scalaz/Cats split. But I have never really needed to care about it and it seems to me that this is history. I could be wrong but it's my personal experience.
Thank you for sharing your experience. Back when I used Scala, this was really turning me off the language. Good that it has all settled and that the Scala community can focus on the more important tasks again.
"Any method of abstraction that you have internalized becomes conceptually free to you." (So for the sake of others, choose wisely which you will expect maintenance programmers to have internalized!)
If you've internalized all of the FP ideas, then adding them to a new environment seems useful and seems to cost nothing. It is hard to keep track of how much burden you are adding for people just learning the environment because, for you, it is free.
It's worse than that. Getting used to something doesn't make it free. It simply makes you unaware of the cost you're constantly paying. Having convoluted abstractions really screws up your thinking. I've seen this first-hand with class-oriented programming, Enterprise Java, and design patterns. People were 100% convinced they were writing awesome code, when in fact they were wasting time and creating dysfunctional monstrosities. The same illness seems to be re-emerging in the FP space thanks to the obsession with types.
Rust uses "traits" to do both generic-like things and inheritance-like things. It's not clear this was a win. The result seems to be a system which does both badly.
Rust generics are very restrictive. They're not at all like C++ generics. It's not enough that every operation on a type needed by a generic be implemented. The type has to be part of a single trait that provides for all those operations. It's like C++ subclassing. So generics over types defined by others can be impossible to write in Rust. This has no safety benefit, since generics are resolved at compile time and all safety tests can be made after generic expansion.
Traits and fixed array bounds do not play well together. This is considered a bug, and it's been a bug since at least 2017. Generic parameters can only be types, not numbers. This led to a horrible hack involving a Rust crate (`typenum`) which provides types U1, U2, ... U50 or so, so that small numeric constants can be bashed through the Rust type system.
Not seeing the benefit of all this.
I have to go struggle with another non-helpful "lifetime `'static` required" message from the borrow checker now.
> This has no safety benefit, since generics are resolved at compile time and all safety tests can be made after generic expansion.
The compiler can verify that types are satisfied. It cannot verify that you agree on what those types and operations mean.
For example, `Iterator` isn't very useful without being able to agree on what `Iterator::next`'s `Option` means. So implicit structural traits à la Go are somewhere between useless and actively harmful.
You can have nominal traits while allowing orphan instances (that is, implementations of foreign traits on foreign types). Scala and Haskell both have that. But IMO neither has a good solution for the conflicts that inevitably arise in that case.
const generics are very close to being done, which will fix the array issue you mention by allowing code to be generic over integers as well as types. This already works for built-in arrays, but using const generics in user-defined traits currently requires nightly.
I agree with you about the orphan rule being annoying and wish they’d relax that, but I much prefer traits to C++ duck-typed templates, if only because it makes error messages from passing an unsupported type to a generic function clear and concise as opposed to the thousands of lines of confusing output you often get in C++.
> Traits and fixed array bounds do not play well together. This is considered a bug, and it's been a bug since at least 2017. Generic parameters can only be types, not numbers.
> Rust uses "traits" to do both generic-like things and inheritance-like things. It's not clear this was a win. The result seems to be a system which does both badly.
IMO the fact that Rust doesn't do inheritance well is a feature, not a bug. My understanding is that most people even in OOP circles have realized that composition is better than inheritance, and I see Rust's choices around it as a reflection of that.
> It's not enough that every operation on a type needed by a generic be implemented. The type has to be part of a single trait that provides for all those operations.
Maybe I'm misunderstanding, but do you know that a generic can combine multiple traits? For example:
fn foo<T: Copy + Add + Sync>(x: T) {
    ...
}
Here T must implement Copy and Add and Sync.
Maybe what you're talking about is the fact that methods' identities aren't fully described by their names + types, but also by their trait? i.e.:
trait TraitA {
    fn foo(&self, x: i32) -> i32;
}

trait TraitB {
    fn foo(&self, x: i32) -> i32;
}

struct Foo {}

impl TraitA for Foo {
    fn foo(&self, x: i32) -> i32 {
        x + 1
    }
}

fn requires_b<T: TraitB>(x: T) {
    x.foo(12);
}

fn main() {
    let f = Foo {};
    requires_b(f);
}
error[E0277]: the trait bound `Foo: TraitB` is not satisfied
--> src/main.rs:25:16
|
19 | fn requires_b<T: TraitB>(x: T) {
| ------ required by this bound in `requires_b`
...
25 | requires_b(f);
| ^ the trait `TraitB` is not implemented for `Foo`
This was an opinionated decision made by the Rust team - which I think was a good decision - to address the fact that it's possible to have unwanted overlaps between trait method signatures. Just because `fn foo(&self, x: i32) -> i32` is defined for a struct doesn't mean it's the same `foo` that you're wanting to call. Distinguishing things by trait strengthens the contract. It also avoids the diamond problem. You're right that this isn't directly related to "safety" in a memory sense, but it's part of an overarching philosophy of Rust that encourages intentionality and discourages footguns.
> Generic parameters can only be types, not numbers.
The thing I found confusing with traits and their generic system was the choice to include "associated types" for output types instead of also defining them in a generic bound:
trait Iterator {
    type Out;
    fn next(&mut self) -> Option<Self::Out>;
}

vs

trait Iterator<T> {
    fn next(&mut self) -> Option<T>;
}
Having multiple ways to do the same thing has made it harder to understand.
> This has no safety benefit, since generics are resolved at compile time and all safety tests can be made after generic expansion.
Rust has macros and procedural macros for the "check after generic expansion" use case. It's a good thing that this is separate from generics; it side-steps a whole lot of incidental complexity that's seen in C++.
I've never really grasped FP and I seem to be in an eternal state of confusion about monads, but I'm excited about GATs in Rust. They'll allow for traits that hide implementation details, like `trait Foo { type Bar<T> = T | Box<T> | Arc<T> }` (pseudocode), without dynamic dispatch or ugly hacks.
I'm not sure if you meant to imply otherwise, but using Haskell without reaching for the most advanced techniques is also a very valid and viable choice.
Definitely, but I'd imagine this is frowned upon because the realistic (and non-fanatic) folks who use Haskell use it mostly for these advanced features. Could be wrong.
Cannot resist. The tension between 'basic feature set' and an admittedly superficial reading of the docs is very funny. The manifesto links to https://github.com/commercialhaskell/rio#readme and urges us to use the rio library to get started. Upon opening the rio link and scanning for a list of the 'basic feature set', I stumble upon the first block of quoted code. After removing 39 EOL characters in deference to the HN audience, it reads:
39 language extensions just to get started. This screams 'incredibly complicated', even if perhaps the reality is rather more mundane. Consider a hypothetical 40th language extension, GradualTyping, so that those who would rather write code about data than about types (using a half-baked and evolving type language which, taken to its logical conclusion, will have to become a full-fledged theorem prover in the Coq / Idris / Agda / Lean lineage anyway) could get their jobs done.
This is a common response to Haskell language extensions. It comes from a misunderstanding of what a language extension is. A Haskell language extension is not "something that radically changes the language"; it is "a small, self-contained, well-tested piece of functionality that for whatever reason wasn't part of the Haskell 2010 standard". In any other language a "language extension" would just be "a feature of the language".
I agree it's doing great for now. This new addition just feels like a minor cause for concern though. Time will tell how things play out and which faction wins: those who actually build shit, or architecture astronauts making towers of monads.
Your claims are at odds with each other. If the people who are passionate about a language are the ones who pursue its advanced features (and therefore somehow force them on the rest of the world), who then is left to "actually build shit"? Why, if simple functionality is such a strong requirement of productivity, aren't those who stick to simple features productive enough to maintain an ecosystem without the Architecture Astronauts building all the infrastructure?
It doesn't track that it's possible to simultaneously ruin a language by sabotaging all of its major libraries with novel features if writing code using those novel features is actually incredibly difficult. It certainly doesn't track that, once you have somehow sabotaged a language's major libraries, nobody bothers to "fix" them by introducing new, simpler versions.
> If the people who are passionate about a language are the ones who pursue its advanced features (and therefore somehow force them on the rest of the world), who then is left to "actually build shit"?
Some people are passionate about the language itself, and programming language theory in general. Others are passionate about solving whatever particular problem their project solves.
A simple thought experiment - think about the most widely used libraries and tools across the whole developer ecosystem. How many are built in Haskell? I count maybe one, Pandoc. How many are built in terrible code bases and languages but chug along anyways? I count thousands. How many wildly successful companies have pristine code bases and how many have trash fire code bases that chug along anyways?
>A simple thought experiment - think about the most widely used libraries and tools across the whole developer ecosystem. How many are built in Haskell? I count maybe one, Pandoc.
PureScript and Elm are two more. If you don't count languages, then xmonad and Darcs are another two. Both GitHub's and Facebook's efforts in mass source-code searching are written in Haskell (though Facebook's is not really released to the whole developer ecosystem).
This is also a misguided thought experiment - Haskell is relatively unpopular anyways (as Rust is). It has a reputation for being difficult to learn (as Rust does). How many tools across the developer ecosystem are written in Rust? Ripgrep, and maybe Alacritty. Does this reflect badly on Rust? No, it's immature and needs a lot of developer support - which is why much of Rust's development effort is in new libraries.
Does a(n alleged) lack of tools reflect badly on Haskell? No, both because it was for a long time considered an academic language, and because Haskell's great successes have also been outside of the "developer ecosystem" - in webservers, for example.
And none of this addresses my original objection to your point: why, if simplicity is so productive, is it not easy to replace complicated libraries with simpler versions? In Haskell, the answer is that the simpler versions are much less powerful, and the power of advanced languages features is actually a boon for productivity, because encoding your invariants in a good type system saves you work elsewhere. That's the whole benefit of Rust's borrow-checker over C++. There is no real risk of Rust getting "too complicated", because these advanced concepts still let people build shit.
I don't know if this is still widely embraced, but Haskell's motto has traditionally been "Avoid success at all costs." It was meant to be a language that embraced PLT and experimented with cutting-edge techniques, so it's not terribly surprising that it's produced more PLT experiments and hasn't produced as much consumer software as, say, Go, which had essentially the opposite philosophy.
Fortunately, when you look at the latest version, Scala 3, you will see that they put more effort into simplifying the language and removing things that people complained about than adding new things.
To be a bit more concrete:
- Removing certain ways of using implicits and making them less confusing
- Cleaning up the language core and removing features (such as impure macros or constructs that are rarely used or confusing)
- Making syntax easier for many cases without adding new features
- Adding union types (comparable to typescript). This is a new feature and not a small one, but I think it will make the language easier to use for many people.
And I think union types and intersection types are a very practical thing to have. So all in all, I'm happy to see that Scala becomes more practical and less esoteric with this release.
So Scala was neither taken over nor destroyed, and the author of this post stresses multiple times that he does not suggest that this approach be adopted by the Rust community, even concluding with:
>But Rust is not Haskell. The ergonomics of GATs, in my opinion, will never compete with higher kinded types on Haskell's home turf. And I'm not at all convinced that it should. Rust is a wonderful language as is. I'm happy to write Rust style in a Rust codebase, and save my Haskell coding for my Haskell codebases.
I haven't been paying too much attention to the language development but AFAIK the vast majority of the FP in scala is in libraries (cats/scalaz) and not baked into the language.
Sure the language _has_ higher kinded types and implicits can be twisted to support type classes, but neither of those seems to have been forced in the language from FP fanatics, but rather the language has always included some set of advanced features.
Why even name-drop academics though? Shouldn't you be dismissing them as irrelevant and praising Gates, Jobs and Gosling instead? You know, real INDUSTRY figures?
> find an example of FP--you can't.
Programming with functions... programming with functions... programming with functions. Sure.
Turing and Church were contemporaries. The Turing Machine was a landmark achievement useful for reasoning about the limits of computation, but where is it today? Have you tried building anything out of a Turing machine (assuming you had infinite tape)? It's basically Brainfuck. Church's Lambda Calculus predates it and is still useful.
John Backus, who can hardly be considered lesser than any of the others on this cherry-picked list, used his Turing Award lecture to rally for functional programming:
B. Expressive type systems, all the way to compile-time metaprogramming and dependently typed proofs.
Writing programs in style A is tremendously valuable. Expending too much effort on the fine points of the type system, which invariably is simultaneously both under-expressive and over-expressive, is a complete waste of time. Some critical projects require high confidence in being defect-free, and for those it's legitimate to go all in on formal proofs and take the 10x-100x productivity slowdown. For mere mortals, documenting the structure of the data (JSON) manipulated by the respective functions suffices.
Turing had about as much to do with real world programming as Alonzo Church so I think that’s equal if you really want to keep score. But there are other famous names in the history of computer science who did support functional programming, eg McCarthy or Backus. And there are “real world” programmers like John Carmack who advocate applying FP techniques to their programs.
But this is a stupid thing to argue about and not relevant to this thread.
I would say that it has a lot to do with CS. Basically there are two equivalent ways to look at computing: as Turing machines or as lambda calculus. The first is far more popular, as early machines and even our current-day hardware are close to this. As machines have become more powerful, we can do more useful work using formulations based on function composition. If anything, this is a trend. What we don't yet have is common sense/knowledge on how to use it judiciously.
I'm writing this on a system with an FP package manager, and anyone programming in any popular language these days will be exposed to a heavy dose of FP constructs and concepts. As far as Knuth is concerned, McIlroy famously buried his grand imperative effort with a one-line functional program 36 years ago. Having said that, your unkind remarks about academic programming language researchers do not strike me as unfounded in all cases.
> Lambda calculus is a BFD in the subfield of programming languages
This is a good example of abusing the vocabulary of mathematics as is common in the FP world. A programming language is not an object in abstract algebra. You can't add, subtract, or factorize a programming language. It's just software.
It seems that Rust just keeps getting more and more features.
Do people generally feel like the more features the better the language? I'm personally of the opinion that less is more.
Is this a pain point at the moment for Rust devs? Do you feel like the code you write is the same code 99% of other Rust developers would write to solve the same problem? Or is there actually a really large variety of styles?
I'm not sure you can only see this in the light of 'more features'. From a more fundamental perspective GATs just lift a restriction.
Rust already has generics and Rust already has associated types. Previously you needed to know/remember that those features cannot interact for some reason; now they can. To me this is in a sense 'less' and not 'more'.
Came here to say this. Adding new syntaxes is something I have mixed feelings about at this stage, but opening up what you can do with existing ones (and, indeed, removing the complexity of remembering special-cases of what's not allowed) is a strict win in my book.
Even in Java, I'm rather sick of passing around types with `<X, Y, ?, ?>` with two honest-to-goodness generic types and two actually-chosen-by-the-implementation generic types. There's no good way in Java to hide those visible wildcards without making a wrapper class that only exposes the meaningful ones.
Associated types make the distinction far clearer. They better capture the qualitative distinction that "you can choose these, I get to choose those".
Also, associated types are inherently "functionally dependent" in the sense of Haskell's multi-parameter type classes. If you have a trait `F<X, Y>` with an associated type Z, you know that given X and Y, Z is fixed. Without associated types, `F<X, Y, Z>` could have multiple legal implementations for the same X, Y and different Z.
This is extremely meaningful in languages like Haskell and Rust which implicitly thread around the trait methods. Typeclasses in Haskell can be imagined as describing concrete dictionaries of functions over the given types. Languages like Java (manually) and Scala (automatically, using "implicit") reify these typeclasses as dictionaries that are threaded through functions as extra parameters. You can often define multiple implementations of a given trait for the same types, and you get to choose which implementation to pass along.
Rust and Haskell assert that traits have a single implementation for a given batch of types, and they automatically look up the correct implementation given the types you've specified. These "functional dependencies" mean that, in my example of `F<X, Y>` with associated type Z, it's sufficient to state what X and Y are to find the right implementation of F -- Z doesn't contribute to the lookup. If you don't have associated types, you could have multiple implementations that simply vary on Z, so you have to tell the compiler explicitly which one to use (by stating what Z is).
Because they don't. Your link even has an answer that highlights one way which they differ in expressivity - namely, that associated types don't appear in the instance head, so they can produce orphan instances where generics would not.
Consider the example of Rust's Iterator trait[0]. It has an associated type for the iteration item. It could have a generic type argument instead, but the two implementations would be different:
1. The associated type is a direct consequence of the instance head. That means that, for the type that Iterator is being implemented on, the associated type is known entirely from that type. If you have an iterable value, you know it only produces one kind of iterator item.
2. As a result, the associated-type version can only be implemented at most once. The generic version would be implementable any number of times, for any number of choices of type argument - and as a result, you would have to specify that you are iterating over an iterator with a particular choice of iterator item type, every time you iterate.
[0]: https://doc.rust-lang.org/std/iter/trait.Iterator.html
You can still do similarly hellish impls with `Index` - especially if you mix `Deref` into it. IIRC, the compiler will perform automatic recursive derefs when you try to use the indexing operator `[]`, until it finds a deref target which matches the index you're using. I doubt anything relies on that search behaviour outside of the compiler code itself.
I imagine abuse of specialisation could also increase the confusion, if you were so inclined.
> "The difference is that when using generics, as in Listing 19-13, we must annotate the types in each implementation; because we can also implement Iterator<String> for Counter or any other type, we could have multiple implementations of Iterator for Counter. In other words, when a trait has a generic parameter, it can be implemented for a type multiple times, changing the concrete types of the generic type parameters each time. When we use the next method on Counter, we would have to provide type annotations to indicate which implementation of Iterator we want to use."
Type parameters are decided by the use of the type. Associated types are decided by the implementation of the type. Type params are input; associated types are output.
Rust hasn't fundamentally changed all that much in the end, the only notable exception being `async`, and I feel like that's a pretty big success so far, being used far and wide from web servers to embedded task scheduling.
I do not currently feel like there are many different solutions to a problem - at least, not in the sense that the solutions only differ by style. Of course each problem has different solutions that sit at different points in the performance spectrum (static vs dynamic dispatch, etc...).
All the big features that are coming (GATs, const generics) feel like lifting restrictions that will remove the need for hacks. To take const generics as an example, they will obsolete `typenum` and `generic-array`, giving us a more consistent style. Less type-hacking is a good thing.
I do agree with the sentiment: Rust is already a big language, and has to be careful when weighing new language features to make sure they don't add too much mental overhead. So far, I feel like they've done a good job of it.
I'd argue that for the most part Rust is not really gaining more features; instead, rough edges are being polished. GATs, for example, sound like a fancy feature, but in reality they just allow you to make your associated types generic, just like everywhere else you can define a type. A Rust beginner would expect this to work by default, and actively has to learn that it currently does not. By enabling GATs, new learners will have to learn less, not more. Most other upcoming features have been planned for a long time, such as const generics and allowing more things in const fn (where, yet again, you currently have to learn the limitations, which are constantly being lifted). I'd expect the development of new features to slow down rapidly once those are finished. In fact they are already struggling to justify a Rust 2021 edition, as there's not a whole lot of features that would even require an edition.
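To make the "associated types, but generic" point concrete, here is the canonical motivating example for GATs - a "lending iterator" whose items borrow from the iterator itself (a sketch; type names are made up):

```rust
// The associated type carries a lifetime parameter: that is the GAT.
trait LendingIterator {
    type Item<'a> where Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Overlapping mutable windows over a slice - impossible with the
// ordinary Iterator trait, since the yielded items would alias.
struct WindowsMut<'t, T> {
    slice: &'t mut [T],
    start: usize,
    size: usize,
}

impl<'t, T> LendingIterator for WindowsMut<'t, T> {
    type Item<'a> = &'a mut [T] where Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>> {
        let end = self.start + self.size;
        let w = self.slice.get_mut(self.start..end)?;
        self.start += 1;
        Some(w)
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let mut windows = WindowsMut { slice: &mut data, start: 0, size: 2 };
    while let Some(w) = windows.next() {
        w[0] += 10; // each borrow ends before the next call to `next`
    }
    assert_eq!(data, [11, 12, 13, 4]);
}
```

Without the lifetime on `Item`, the borrow checker has no way to express that each item lives only until the next call, which is exactly the restriction being lifted.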
As someone who uses Rust, "more features" is not a win. Most of the time it rarely impacts me - especially in Rust it seems that a "Box" here or there can fix code at the potential cost of performance.
But there are cases, often for libraries, where features can be quite helpful and make me faster. If a feature is basically "make code that should have already worked actually work", that's a huge win.
A lot of Rust features tend to be that. It's like "OK, we have 'impl trait', but it only works in some places. Let's let it work in more places." So, yeah, sure, that's a new feature - impl trait in new positions - but it's really just supporting code that many people would have expected to work.
I see this GAT feature similarly. It wouldn't be hard to "accidentally" try to have a generic associated type - in fact, I have probably run into this myself. And so GAT isn't really adding more complexity to me, it's just unlocking code that I would have already written.
Similarly, with GAT, we can unlock 'async' in traits. I already know 'async', I know it on functions and methods. So this is, again, just making an existing feature more consistent.
Yes, more features make a better language. I need those features to abstract my software so I can wrap my mind around all 15 million lines of code I have to deal with (that is both too many for one human to understand, and yet I know many will respond that they work on much larger systems!).
We don't need new languages without features: it has already been proven that you only need exactly one feature in your programming language to write any possible program. Look up single-instruction languages for proof that such minimal languages exist.
What we need are languages with many features that are easy to use to create complex programs that are still easy to understand.
C++ is a very powerful language, but the various features do not fit together well, and so even experts tend not to know how to use everything, and the whole suffers. Most languages since have attempted to take all or some subset of the features of C++ to make a better language. (When another language comes up with a good feature, C++ copies it, and so often gets credit for ideas that actually come from elsewhere - possibly in better form.)
When someone says they want fewer features, they mean one of the following: "I don't know how to use some feature and I've gotten this far, so it must be useless". "That feature is useless for my domain, so I don't want to pay the compromises required to allow it". "That feature is too often abused to write bad code, so it isn't worth having". "That feature is neat but it makes the whole language ugly, so it isn't worth having". There is probably something I missed, but you get the idea. Some of the above are logical fallacies; the others are compromises that apply to a particular problem. None of them are universal truths.
Came here to say this. The last major language changing feature was async/await.
Everything else seems to be on the order of small polish to the language (trailing commas), standard library expansion, and in some cases language restrictions to make rust fit better with embedded systems.
In many ways, Java is seeing more major changes than rust is.
The main difference is that in Rust, if you don't use a feature you generally don't have to know about it. Many features are intuitive(-ish), and thanks to Rust's safety guarantees you can often just try whether your intuitive understanding works; as long as it doesn't involve unsafe, you normally don't get any bad surprises.
In C++ this is less the case: it's quite easy to write code that seems OK and currently happens to execute as expected, but triggers undefined behaviour.
Lack of monads is not a super huge pain point, but GATs do solve a big pain point: they're needed in order to have async methods.
To be clear here, you don’t need to write any of this stuff in the blog post to write an async method, but the desugaring of an async method involves GATs, so the language has to have them.
Lifetimes do not influence code generation (besides potential specialization on 'static, which I'm not sure is a thing).
But besides that, the amount of work is indeed very similar, which is why there is currently no plan to stabilize lifetime-only GATs first, as far as I know.
To have lifetime-only GATs you need to change the syntax and the lifetime checker, but once you have checked the type
you can treat it ignoring lifetimes, and in turn ignoring the GAT aspect.
But if you did that, you would probably already have implemented most or all of the code needed to typecheck non-lifetime GATs, and doing the rest properly is probably not much harder than doing lifetime-only GATs.
> You cannot specialize on lifetimes, it is not sound.
Good to know; given that specialization has been stuck in nightly for the last few years, I didn't use it and in turn didn't look too much into it.
> Is this a pain point at the moment for Rust devs?
No, in rust as long as you don't use a feature you don't need to know about it.
> Or is there actually a really large variety of styles?
There are official(-ish?) style guidelines. So while there are a variety of styles most code uses mostly the same style.
> I'm personally of the opinion that less is more.
Sure, but you normally want to have a "complete well rounded language".
For this, in Rust, you need GATs, but you don't need HKTs, Functors, Monads, or similar.
I hope there will never be a Monad trait in std; it would add a lot of complexity we really don't need, and would IMHO mostly be used for useless over-abstraction.
But GATs are really needed, e.g. to properly handle certain async use cases (though for this you only need GATs for lifetimes, not arbitrary types; in my experience, GATs limited to lifetimes cover 80+% of the cases where you really, really need GATs).
> No, in rust as long as you don't use a feature you don't need to know about it.
I don't think that's a particularly good argument, since it's true for all languages. Even if you don't use a feature, you will use and read code written by others all the time and understanding that code can be pretty important.
I agree with you. Spend enough time in an ecosystem and you'll eventually find a need to interact with most of its features.
My experience in Rust has been pretty positive. I'm writing a new book on systems programming with Rust and, as a result, have spent a great deal of time digging into the internals of libraries and the compiler. The compiler is, by far, the most surprising, because it's allowed to use nightly features in new and interesting ways. Everything else, when I encounter something new to me, I'm able to understand from the Rust documentation. The key differentiator with C++ is, I think, the focus on documentation and on ensuring that new features are explainable in a simple way. This helps make new features introduced into the language fit together, to my eye.
That said, there are areas of the ecosystem _outside_ the language that are hard to keep up with. The future notion in Rust used to be like that, before Future was included in the base language. That's tricky but, again, I think Rust strikes a good balance here: conservative about content in the base language, enthusiastic experimentation in the ecosystem. It's possible that this'll break down some day but it hasn't yet and I don't see it as happening soon.
The difference is that with Rust's safety you can, to some degree, "try things out". That doesn't work with C++, as you might have hidden UB.
Also, C++ has a bunch of "hidden" features and unexpected interactions between features and UB, e.g. the forward-progress guarantee, because of which `while(1);` is UB.
GATs remove some restrictions on the existing facilities of traits rather than creating a totally new feature. To be clear, figuring out the subtleties of exactly how this is verified, and the edge cases in the type system, is non-trivial; but as a developer, this extends the set of types that "just work" as associated types. Crucially, it lets you avoid some weird hacks for making the borrow checker happy with non-'static-lifetime associated types in many situations.
> They're all individually great pieces of software but none of them feel complete and I always reach out for something that is not there.
I'm not convinced that the libs being in std would help you with this issue. Instead you'd probably have the same incompleteness with a whole bunch of the functionality only available in nightly rust because of the stability contract implied by std.
Picking either tokio or async-std (flip a coin if you can't decide) and using their recommended way of doing a given async task isn't really all that different from doing the same with a hypothetical std::async.
Is there any work on creating feature-based sub-standard libraries that are aggregations of popular (and robust) crates but tying all of their versions together?
For example, I'd like to put in my Cargo.toml:
sstd = { version = "1.0", features = ["clap", "serde", "rand"] }
Where the sstd crate essentially has a whole mess of crates bundled together ensuring only one version is used for each individual dependency?
Perhaps this is a question best asked elsewhere - your comment just brought it once again to my mind.
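For what it's worth, an aggregator like the hypothetical `sstd` above is expressible with ordinary Cargo optional dependencies and feature gates (crate names and versions here are illustrative only):

```toml
# Cargo.toml of the hypothetical aggregator crate
[dependencies]
clap  = { version = "4", optional = true }
serde = { version = "1", optional = true }
rand  = { version = "0.8", optional = true }

[features]
clap  = ["dep:clap"]
serde = ["dep:serde"]
rand  = ["dep:rand"]
```

Its lib.rs would just `pub use` each crate behind the matching `#[cfg(feature = "...")]`, so downstream users get one coherent, jointly-tested version set per aggregator release. (The `dep:` prefix requires a reasonably recent Cargo.)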
I mean, I've run into situations where GATs would have been a good solution before and had to find a clumsier workaround using boxing or other solutions until now.
So I don't really know what to say to your initial question, except "this feature is useful", and that the question (and especially your answer to it) may be a bit overly reductionist.
The reason Rust has so many features is to manage the complexity lifetimes introduce in a safe, zero-cost way. The big recent additions and outstanding work is trying to fill gaps there.
* async - you can write futures without async, but async lets you do more because of how it resolves lifetime issues you'd normally get when using combinators. The other route without async would be to manually write unsafe state machines. Yuck.
* GATs - these are required to be able to have native support for async methods in traits (interfaces) without needing to box the return value (i.e. have static dispatch... not sure what the term is, but it lets the call be inlined too, IIRC; I might be wrong about the performance impact, but the ergonomic pain is real)
* const generics - this gives better support for stuff like arrays, since really what it's doing is stuff like supporting array length as a generic type parameter instead of a special case.
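On the const generics point, the array length becomes an ordinary parameter rather than a special case (a small sketch):

```rust
// N is a const generic parameter: the function works for any fixed
// array length, and mismatched lengths are a compile-time error.
fn dot<const N: usize>(a: [f64; N], b: [f64; N]) -> f64 {
    (0..N).map(|i| a[i] * b[i]).sum()
}

fn main() {
    // N is inferred as 3 from the arguments.
    assert_eq!(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]), 32.0);
}
```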
I would say, with the exception of the async split, no. The biggest pain is still the learning/skill curve, it always feels (to me) like there's a language feature I'm not taking advantage of that would make a piece of code more "rusty". Not different in style necessarily, just more elegant.
I'm working through a Rust course right now, and while my code works fine (once it compiles), I always see the reference implementations of the code, and it's like 2 lines of filter/map/collect, and voila. Meanwhile, my 8-line frankenfunction looks like the Charlie Brown Christmas tree.
Using a loop is not a sin, but it is often counterproductively over-prescriptive. A loop says, 'do these things in this order in a single thread'. If the loop body is an effect-free function, then the order doesn't matter, why not use a map to let the compiler or run-time decide how many cores/containers/botnets to throw at the problem? Similarly, if the loop combines the results of its iterations in a unital and associative way, why not use a fold/reduce in order to get as much parallelism as possible for free? Sure, your compiler could try to do some fancy static analysis to try to figure out whether the loop you've written is equivalent to a more efficient program, and if so, replace it for you, but that's a lot of work for the compiler writer and inherently limited: we're a long way away from compilers being able to guess properties of programs and synthesize proofs for them. Sometimes avoiding loops is both conceptually clearer and practically more efficient.
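The loop-vs-fold contrast above, sketched in Rust (sequential here; actual parallel reduction would come from a library like rayon, not used in this sketch):

```rust
fn main() {
    let xs = [1, 2, 3, 4, 5];

    // The loop prescribes: do these steps in this order, in one thread.
    let mut total = 0;
    for x in &xs {
        total += x * x;
    }

    // The fold only prescribes an identity element (0) and an
    // associative combining operation (+); that shape is what a
    // parallel runtime could split across cores for free.
    let total2: i32 = xs.iter().map(|x| x * x).fold(0, |acc, sq| acc + sq);

    assert_eq!(total, total2); // both are 55
}
```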
> why not use a fold/reduce in order to get as much parallelism as possible for free?
I always wondered how this would then be configured in a convenient way. I mean, there are situations where you can not just let the compiler parallelize to the max (e.g. web services).
99% of the time a loop works just fine, because there are no measurable gains to be had from parallelism. For the 1% where performance matters, it's usually a bit more involved than simply using a map or fold, and hopefully already packaged as an off-the-shelf library. To have measurable gains from parallelism, one has to be very intentional in balancing communication vs computation. Think carefully designed libraries like cuDNN.
I have decided to learn Rust to replace C (for me) instead of Zig, which I like, but Rust has more support for my learning in terms of community and documentation. I plan to return to Zig when it matures a bit more. I get my dose of FP when I program in J or APL. I use them for very mathy and fun things, and I don't even think about FP; they are FP. I am working my way through an 11-page paper that implements a CNN in APL in 10 functions that amounts to 10 lines of APL, 1 function per line [1]. Amazing and simpler than my stabs at Haskell and Idris or Scala and F#.
Rust is indeed starting to enter the territory where features are added on top of features, as a way to deal with there having been too many restrictions at the start.
It is starting to become a pain for me, but I'm glad there is apparently some semblance of HKTs now, which was very much wanted.
I used to have this position as well, writing everything in either assembly or pure lambda calculus, to avoid all these pesky higher level language features.
As has been noted in other forums, this article does not cover implementing Functor/Monad for the Option type family (for example) faithfully.
What's missing is implementing Functor for `Option` (the "unapplied" Option, without a type parameter), in such a way that it is evident from the trait implementation that if you pass an Option to `fmap`, then you get an Option out (not just any Functor). In other words, that the particular flavour of functor or monad is preserved by these operations.
Proof of concept code for this exists, but Rust doesn't support it in a fluent way, even with the GAT feature.
> What's missing is implementing Functor for `Option` (the "unapplied" Option, without a type parameter), in such a way that it is evident from the trait implementation that if you pass an Option to `fmap`, then you get an Option out (not just any Functor).
I was momentarily confused by this explanation, so allow me to distill the problem.
In the trait declaration for Functor, nothing requires that `<X<T> as Functor>::Wrapped<T> == X<T>`. In other words, the implementing type can be different from the return type of `map`. You would want to return `Self<T>` from `map`, but `Self` refers to the fully-reified type, which is to say the implementor has already decided what the type parameter (if any) is, and you as the trait author have no control over it.
You need some way to force the above equality, which is probably what the referenced "proof of concept" does. (I think I found it here: https://github.com/edmundsmith/type-plugs)
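To make the gap concrete, here is a sketch modeled on the article's GAT-based `Functor` (trait and method names approximated): the impl chooses `Option<B>` as `Wrapped<B>` purely by convention, and nothing in the trait would reject an impl that picked, say, `Vec<B>` instead.

```rust
// Nothing here forces `Wrapped<B>` to be the same type constructor
// as `Self` - that is exactly the missing equality.
trait Functor {
    type Unwrapped;
    type Wrapped<B>;
    fn fmap<B>(self, f: impl FnMut(Self::Unwrapped) -> B) -> Self::Wrapped<B>;
}

impl<A> Functor for Option<A> {
    type Unwrapped = A;
    // By convention only: `type Wrapped<B> = Vec<B>;` (with a matching
    // body) would also typecheck against the trait.
    type Wrapped<B> = Option<B>;
    fn fmap<B>(self, mut f: impl FnMut(A) -> B) -> Option<B> {
        self.map(|a| f(a))
    }
}

fn main() {
    assert_eq!(Some(2).fmap(|n| n * 21), Some(42));
}
```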
>> Interestingly, we have lost the knowledge here that Self::Wrapped<T> is also a Pointed. That's going to be a recurring theme for the next few traits.
Adding that restriction would make it impossible to implement Functor for lazy collections (for example, `<I as Iterator>::map()` returns a `Map<I, F>` where F is usually some unmentionable closure type).
The lazy collections are an interesting case, because as you build up a stack of transformations, the type itself grows in ways beyond simple A -> B type substitutions. That's a really good example of a case better served by GATs than HKTs -- they're not that kind of Functor to begin with.
To be explicit, a lazy collection type in Rust is more than just `List<T>` -- it needs to carry an additional type describing the set of transformations being applied. You'd really have something closer to `List<F, T>`, and as you apply transformations, F grows while T is substituted.
(You could avoid this by storing a trait object for your transformation stack instead of parametrizing over F, but half the point is that you can often flatten the whole stack down into highly-efficient generated code, and abstracting over the precise F hinders that flattenability.)
Even though I see that these language capabilities remove restrictions I hope that HKTs and with them Functors, Monads, etc. will never make it into Rust.
The thing I like most about Rust is that it is still a practical language where I can still solve my problems in different ways and always know what will happen. I also care about the "how" and not only about the "what".
Everything being F[_]ed is what I do not need anymore.
Functors and Monads already exist in Rust; you just can't talk about them in the trait system. Option, Result, and Future are all both Functors (map) and Monads (and_then). You've probably used them without even knowing, because we didn't give them weird, unlearnable category-theory names.
If someone were to create an explicit trait for them and RFC it, they'd probably name it something like Map and Then rather than Functor and Monad.
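For what it's worth, the operations in question are just the familiar combinators:

```rust
fn main() {
    // map is the "functor" operation on Option...
    assert_eq!(Some(21).map(|n| n * 2), Some(42));

    // ...and and_then is the "monad" one: it chains steps that may fail.
    let parsed = Some("42").and_then(|s| s.parse::<i32>().ok());
    assert_eq!(parsed, Some(42));

    // Result has exactly the same shape.
    let r: Result<i32, ()> = Ok(6).map(|n| n * 7);
    assert_eq!(r, Ok(42));
}
```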
If someone asked me if I've ever used functors or monads in Rust, I would unequivocally say "no, I haven't." And I would say that because I've never written code that's generic over functors or monads.
I think this kind of "you've already used functors or monads, they are just scary names" retort is really missing the point of what folks are complaining about.
Right, but I can think of a few cases where one might want to be generic over effect systems; especially in library code. For example, you could have a parsing library that accepts both blocking and non-blocking I/O streams. You need a Map/Then trait in order to express that generically.
I didn't say there weren't any use cases. Of course there are use cases.
I'm not even taking a side here in this thread (although I have advocated against functors/monads in the past). I'm saying that your comment is missing the point of folks who are skeptical of things like functors and monads.
> Option, Result, and Future are all both Functors (map) and Monads (and_then).
No. They are Option and Result and Future.
> You've probably used them without even knowing, because we didn't give it some weird, unlearnable category theory name.
No. I have not used them without even knowing. I have used Option, Result and Future. I do not need some meta-universe which just makes easy things more complicated by stating some laws which types must hold just for the sake of discussing them and have the one ring to rule them all.
It could be argued that understanding the commonality of certain aspects of Option, Result, and Future results in a simpler model rather than a more complicated one.
Don't you find it intriguing and interesting that these seemingly different types have these commonalities?
Well, you somehow got me. When I mentioned the "everything F[_]ed" in my top post I thought I'd made it clear that I knew the commonalities.
And I have to admit that I find these commonalities very fascinating. I can still remember how I enjoyed applicatives and the like when I started with scalaz back then. Whether the model becomes simpler? I do not really know. I just found out for myself that I do not need this knowledge at work and that it didn't really help me to solve my daily problems. This is actually what drove me to Rust which in my perception is a nice practical "in between" (I know it also is no silver bullet).
You will probably not see much Haskell style functional programming in Rust simply because there is no garbage collector in Rust, and Haskell depends on one very much.
It isn't immediately obvious that FP requires GC. Sure, the languages we have right now make use of GC, but it is far from certain that there is no way for a subset of FP features to exist in a language without GC.
The F in FP definitely hugely benefits from GC. Closures in Rust don't compare to Haskell.
Combine that with GHC's optimizer and it's no contest which language to choose if both ergonomics and performance of code written largely as lambdas is your priority.
The core issue here isn't exactly GC, though it is related. The issue is that abstraction is about hiding things, and Rust's type machinery is about exposing things. In Haskell, you have lambdas, and all values look alike to the type system because everything is on the heap. In Rust, each closure is a distinct type, plus there are three different kinds of closures, plus types of different sizes...
It's not so much the GC, as it is "in a GC'd language, everything is on the heap and a lot of information (from Rust's perspective) is erased." You may be able to get rid of GC in some sense, but you'd end up with something halfway to Rust, not the whole way.
No problem. I mean, in some sense, it's also about abstracting things, it's just that Rust cares about a lot more details than other languages, and those get reflected in type signatures.
Hm ATS is prior art re: linear types and closures. It's definitely not the same.
That said, -XLinearTypes in GHC 9.x will open up a world of memory management capabilities in library-space for Haskell. Very exciting stuff! If you push the heavy stuff off-heap, then the GC just becomes a slightly fancier arena allocator.
I'm very interested in that. I was using Zig for a while, and I wish I could build my own (simple) memory allocators for certain things in Haskell like that. I have a project where I want to directly control the memory layout of the data structures, but since that's just a tiny part of the whole program, I don't want to build the whole thing in Zig or something like C; I prefer Haskell for this specific purpose. (It's a stream processing application with a small [code-wise; the data set can be huge] in-memory tree database whose memory allocation I want to handle myself.)
Yeah, you can do that. I'm actually working on a little library to do just that (control memory layout), although it's more focused on 0-copy FFI with hsc2hs.
You can also have the RTS manage the memory itself, but have complete access to a pointer to raw memory. That doesn't work if you have pointers in the raw memory ofc, but it is a nice option if that's not the case.
If you don't mind, can you send me your repos if it's public work? I'm interested in 0-copy stuff, too, the database I'm doing can benefit from 0-copy (again, a small part of the codebase but a big part of the technical challenge), and I'm seeing that linear types help a lot with that kind of thing from the reading I've done lately. I'm mostly a tinkerer and like to see what's the newest cool stuff people are making. My email is in my profile.
Unfortunately we don't yet have covariant associated types, so my GC is a pain to use.
In all seriousness, I'm not sure if Rust will ever be a great fit for abstract high-level programming. The syntax is just too verbose, and lifetimes are a leaky abstraction. It might make a good compiler target though. At the very least, next-generation FP needs to have a great Rust interop story.
Does anybody know if GATs would enable issuing indexes/handles tied to a container without the use of closures? This would allow compile-time checks that indexes are only used with their container.
The most advanced crate for such a use case is `indexing` [0], but it is pretty complex inside, requires closures, and seems more like an experiment.
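For reference, a minimal sketch of the lifetime-branding trick that `indexing` builds on (simplified; the `Arena` type here is made up): an invariant lifetime ties each index to the container that issued it, and the closure is what manufactures the fresh, unnameable lifetime, which is exactly why it's hard to do without closures.

```rust
use std::marker::PhantomData;

// Invariant in 'id: the brand can't be widened or narrowed away.
type Brand<'id> = PhantomData<fn(&'id ()) -> &'id ()>;

struct Arena<'id> {
    data: Vec<i32>,
    _brand: Brand<'id>,
}

struct Idx<'id> {
    i: usize,
    _brand: Brand<'id>,
}

impl<'id> Arena<'id> {
    fn push(&mut self, v: i32) -> Idx<'id> {
        self.data.push(v);
        Idx { i: self.data.len() - 1, _brand: PhantomData }
    }
    // An Idx<'id> can only have come from this arena, so in principle
    // the bounds check here could be elided.
    fn get(&self, idx: Idx<'id>) -> i32 {
        self.data[idx.i]
    }
}

// The higher-ranked bound makes the closure generic over a fresh 'id,
// so every call creates a distinctly-branded arena.
fn with_arena<R>(f: impl for<'id> FnOnce(Arena<'id>) -> R) -> R {
    f(Arena { data: Vec::new(), _brand: PhantomData })
}

fn main() {
    let v = with_arena(|mut a| {
        let i = a.push(42);
        a.get(i)
    });
    assert_eq!(v, 42);
    // Using an Idx from one with_arena call inside another fails to
    // compile, because the two 'id brands cannot unify.
}
```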
In principle, the ingredient you need for that use case is "existential types". I suspect GATs let you define existential types -- after all, the selling point is that the trait defines some type, and you can't rely on what type any particular implementor chooses. (But I can't quite put my finger on why non-G associated types shouldn't be enough.)
In Java(!), I have a class `Database<$Database>` whose constructor is private and whose sole static factory returns `Database<?>`. Thus, the caller must assume that every instance they create is parameterized over a distinct unknown type -- even if, in the implementation, $Database = Object. The dollar-sign prefix is a hint to the reader that this type parameter is meant to be a type correlated with a unique value.
You can get an `Index<$Database>` from any database, and it's parametrized over the owning database's unique existential type. Indexes over two different databases can't be confused.
The only issue (so far) is that you have to be careful not to "forget" the correlation between the type parameters on an index and its owning database. If the type parameter decays back into a wildcard, the correlation is lost.
I know some people are worried about this, but the true power of GATs is to allow you to add lifetimes to associated types. This leads to a whole avenue of new optimizations and efficient representations.
It’s not about making things “more like Haskell”, that just happens to be a consequence of making more low-level operations representable in safe rust code, which if you think about it is the way rust has operated from the very beginning.
I like that the article starts by describing Functor as "something almost every developer knows about". That might be a bit of a stretch. I would say almost all developers have interacted with what could be called a functor, but knowing the concept is something different. In certain areas it is starting to get well known, though.
I agree, though I think developers should be aware of type-parameters, and should be aware that if A is a subtype of B then List[A] should be considered a subtype of List[B] (which turns List into a functor on types by inclusion). In fact I reckon almost every developer is aware of these things but perhaps doesn't realise the underlying pattern.
> if A is a subtype of B then List[A] should be considered a subtype of List[B]
Well, careful -- I'm sure you know this by your name, but the naive treatment of Java's native array type as A[] <: B[] when A <: B leads to well-known soundness issues.
Fundamentally that's because you can both read and write to Java's arrays; if you assume arrays are read-only then A[] <: B[] is indeed sound.
Learn You a Haskell - I bought the book last year but, for lack of time, didn't work through it.
I should dust the book off in January and give it a read.
Many Haskell and functional Scala books will give you a good understanding of type-driven development, but I'd recommend "Functional Programming in Scala" by Chiusano and Bjarnason (and do the exercises).
Alternately, "Learn You a Haskell" will provide a decent base.
Either of these will give you the base needed to explore higher-level concepts that can easily be used in Haskell, Scala, Purescript, OCaml, a few statically-typed functional languages, and to some extent, Rust.
Seconded. Even if you don't later use Haskell, it's a great language to learn if you want to see some interesting programming concepts that are tried and true but not well known. You could also try something like Purescript if you want your code to compile to CommonJS, or try Idris 2 if you also want to try out linear types, which is cool.
In their defense, this looks like content marketing for people who buy functional programming consultancy services, so they're not really writing for a general audience. I tend to agree on initialisms, though. I think a lot of the time people use them as a signal of their membership in the technical "in group", to establish a place in a social hierarchy. That's not good communication.
Thank you. The article doesn’t unpack the initialism, and the author just links to a Rust Playpen (or whatever) site which supposedly uses this three-word thing.
I was going to object, but in doing my own research I found that Haskell ranked even below Dart in the SO Developer Survey for languages in use - and I certainly wouldn't consider Dart popular, since I've never seen, let alone met, anyone using it.
That said, "popular" is a very contextual term. Within certain classes of programmers (and I don't mean the obvious tautological one) Haskell is extremely popular, and fit for purpose. I assume the same is true for Dart.
And I would add that, in those circles, the introduction of advanced type-system features like RankNTypes has been extremely successful.
template template parameters and nested template classes work as HKT in C++ in practice (and are extensively used for exactly that purpose), and I think C++ counts as a popular language.
I was surprised to learn that HKTs weren't part of Scala from the beginning and had to look that up. You're a bit wrong about when that happened: it was Scala 2.5 in 2007, not 2.8, which was released in 2010.
I want a language designed by one very smart person, a dictator. I'm looking forward to Jai but for now the best native language is still C++. It also has too many features like Rust but at least it works.
This confirms my concerns.. Rust is becoming a Scala-like language, too many features.. Rust should have been as simple as C. I wonder if there will be compiler switches to ban certain features, and crates tagged as working with certain features, so at least things will be easier to deal with.
"Using" a programming language is 99% reading and understanding code and 1% writing code. I only control the 1% i write and not the 99% I read. You can't know everything but you need to know enough to understand most of the code you read. If a feature becomes popular among others, then you need to understand it, even if you never use it yourself.
It's never this simple. You have to decide what subset to use, understand how that subset could interact with the avoided language features, have a strong team culture that can agree on the subset and have the discipline to commit, understand how new language features affect that commitment, etc. In addition to needing to read and interact with other libraries that probably use it anyway.
The team aspect alone is a deal breaker for me--when you have a technology that attracts people who are intrinsically interested in the technology, good luck trying to get them to avoid certain parts of it. That's a losing battle.
You can write custom lints if you're willing to build a custom toolchain (shouldn't be too terrible to rebuild it once per release).
That said, GATs fix a real pain point not actually shown in the article, and that's impl-Trait return types in trait methods. You can have impl-Trait return types in normal functions, and even in inherent methods, just not in trait methods. I think the term for this is existential types. And they occur a lot with async, so async in traits is a real pain point this feature fixes.