Rust 1.34.0 (rust-lang.org)
389 points by steveklabnik on April 11, 2019 | 144 comments



The history of TryFrom/TryInto has spanned 3 years, from when it was originally proposed as an RFC in 2016. For a seemingly simple API, it's gone through a lot. Especially unusual was that it was stabilized a few releases ago and then had to be destabilized when a last-minute issue was discovered with the never type (`!`). The never type had been the primary blocker for stabilizing these APIs for the last year or so, but it was finally decided to simply use this temporary `Infallible` type, which would be mostly forwards compatible with the never type itself.
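For anyone who hasn't used them yet, the newly stabilized API looks like this (a quick sketch):

```rust
use std::convert::TryFrom;

fn main() {
    // Widening u8 -> u16 cannot fail; try_from still returns a Result,
    // with Infallible (the stand-in for `!`) as the error type.
    let n: u16 = u16::try_from(5u8).unwrap();
    assert_eq!(n, 5);

    // Narrowing i32 -> u8 can fail at runtime, so the Result matters here.
    assert_eq!(u8::try_from(200i32), Ok(200u8));
    assert!(u8::try_from(300i32).is_err());
}
```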

I've followed the issue closely because it's one of the features used in Ruma, my Matrix homeserver and libraries. In fact, for the library components of the project, it was the last unstable feature. With the stabilization of these APIs, I'll finally be able to release versions of the libraries that work on stable Rust. This will happen later today!


Here's a funny quote about that...

>> Can you eli5 why TryFrom and TryInto matter, and why they've been stuck for so long? (the RFC seems to be 3 years old)

> If you stabilise Try{From,Into}, you also want implementations of the types in std. So you want things like impl TryFrom&lt;u8&gt; for u16. But that requires an error type, and that was (I believe) the problem.

> u8 to u16 cannot fail, so you want the error type to be !. Except using ! as a type isn’t stable yet. So use a placeholder enum! But that means that once ! is stabilised, we’ve got this Infallible type kicking around that is redundant. So change it? But that would be breaking. So make the two isomorphic? Woah, woah, hold on there, this is starting to get crazy…

> new person bursts into the room “Hey, should ! automatically implement all traits, or not?”

> “Yes!” “No!” “Yes, and so should all variant-less enums!”

> Everyone in the room is shouting, and the curtains are spontaneously catching fire. In the corner, the person who proposed Try{From,Into} sits, sobbing. It was supposed to all be so simple… but this damn ! thing is just ruining everything.

> … That’s not what happened, but it’s more entertaining than just saying “many people were unsure exactly what to do about the ! situation, which turned out to be more complicated than expected”.

https://this-week-in-rust.org/blog/2019/03/05/this-week-in-r... https://www.reddit.com/r/rust/comments/avbkts/this_week_in_r...


> new person bursts into the room “Hey, should ! automatically implement all traits, or not?”

Lol; that was me (although there were probably others). I approve of the dramatic rendition.

https://github.com/rust-lang/rfcs/issues/2619


And a funny addendum for context (from the same r/rust thread as the above quote):

> The never type [is] for computations that don't resolve to a value. It's named after its stabilization date.


> so you want the error type to be !.

For others having trouble grokking that sentence: ! is the never type. Should never happen. https://doc.rust-lang.org/std/primitive.never.html
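The `Infallible` trick described above can be shown with a hypothetical user-side impl (`Meters` is made up for illustration; `Infallible` itself is one of the types stabilized in 1.34):

```rust
use std::convert::{Infallible, TryFrom};

// A conversion that can't fail still needs *some* error type to satisfy
// the TryFrom trait; Infallible fills that slot until `!` is stable.
struct Meters(u16);

impl TryFrom<u8> for Meters {
    type Error = Infallible;

    fn try_from(v: u8) -> Result<Self, Self::Error> {
        Ok(Meters(u16::from(v)))
    }
}

fn main() {
    // Unwrapping is always fine: the Err variant can never be constructed.
    let m = Meters::try_from(7u8).unwrap();
    assert_eq!(m.0, 7);
}
```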


Hmm, fn before_exec strikes me as something that should be removed entirely in favor of unsafe fn before_exec, if the former indeed turned out to have the potential to cause undefined behavior. But that'd probably be a breaking change requiring a major version number bump, so deprecation is absolutely the right thing IMO. And it also sets the stage to remove it outright come the next major version bump.
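For context, the replacement std shipped in this release is `pre_exec`: identical in behavior but correctly marked unsafe, because the closure runs between fork() and exec(), where only async-signal-safe operations are allowed (Unix-only sketch):

```rust
use std::os::unix::process::CommandExt;
use std::process::Command;

fn main() -> std::io::Result<()> {
    let mut cmd = Command::new("true");
    // The unsafe block is the whole point: the caller must promise the
    // closure is async-signal-safe (e.g. setuid/setgid calls are fine,
    // allocating or locking a mutex is not).
    unsafe {
        cmd.pre_exec(|| Ok(()));
    }
    let status = cmd.status()?;
    assert!(status.success());
    Ok(())
}
```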

This is the first such instance of "oops, turns out that wasn't safe after all, that should have been unsafe" that I've heard of in Rust. Is that because this is the first such mistake since the 1.0 milestone (the rest of `unsafe` having been nailed down before 1.0), or have there been other such mistakes that I didn't hear about because I haven't read the notes for all of the prior releases of Rust?


We can’t get rid of it because we have a commitment to not breaking users’ code. There will not be a Rust 2.0.

There have been some small soundness holes in the typesystem that we have fixed, resulting in breakage, but since that’s in the language, that’s the only way. A library API is different. There haven’t been many of these though. We did have some point releases which immediately fixed some library errors that were introduced by a release, see 1.15.1. But that was in a new API, and so the chance of breakage was extremely low. This API has been around for years. It is also not used much, given it’s a *NIX specific extension you have to explicitly import.

Note that upon using it, you’ll get a warning, so everyone will at least be notified.


>> We can’t get rid of it because we have a commitment to not breaking users’ code.

Totally the correct way to go about it, IMO. Thus my comment about deprecation.

>> There will not be a Rust 2.0.

>> Note that upon using it, you’ll get a warning, so everyone will at least be notified.

:/ That I've mixed feelings about. Why not? Is it because of the fiascos between Perl 5/6 and Python 2/3? Deprecations and warnings about using deprecated features are indeed the correct thing to do, but IMO they aren't the whole picture. I think back to all the times I went to compile something and got to watch a constant stream of warnings (or worse, got hit by bugs I shouldn't have because the programmer finally took the time to suppress that annoying warning pointing to the flaw in his code). A (very occasional) major version bump to say "These things we found out were actually wrong and told you not to do N years ago? We meant that; those are gone now" strikes me as the Right Thing for a language with a major focus on correctness, like Rust.

But, since I'm armchair quarterbacking here on HN instead of getting my hands dirty building a major language, I'm willing to concede that my opinion might be different were it informed by experience.


It’s partially that, it’s partially that systems people are very conservative, and it’s partially Rust’s own history before 1.0. People do view major breaking changes very differently these days due to those specific situations, and some people still think Rust changes daily. Systems languages tend to have a stability timeline of “forever.”

Rust does have a strong commitment to safety, but not an absolute commitment to correctness. These things happen very infrequently. Is it really worth taking an extreme action (which a major language version bump is, especially in the systems space) just to turn a few warnings into errors? Currently, we don’t think so. Maybe in 20 years, when (And if! :) ) there is more than one standard library API that suffers this problem, it would be worth it, but at the current time, it just doesn’t seem to make sense.


I personally feel like this falls clearly into the soundness-hole category and should be allowed to be removed in a breaking way. I understand the desire to avoid breaking changes, but soundness issues like this chip away at what "safety" in Rust really means.

I think I'd expect to see warnings about the existing safe method being deprecated for a while, then its complete removal at the time of another edition of Rust. Users of older versions of Rust can still use the removed function, but newer editions ban it.

This could be implemented as a reserved STD function list in the compiler if it must, since as you've mentioned we only have one STD lib atm.


Yes, as I said below, this is technically a thing we’re allowed to remove. Maybe someday we will. But we want to be careful.

Editions cannot make this kind of change, as also discussed below.


I think it could be an edition thing though... just say Rust 2019 will not allow `Command::before_exec`. By turning on edition 2019 you've accepted the new breakages that it imposes on you, and this is one of them.

Never mind the fact that the function is still a symbol in the STD library.

I understand if this is too much compiler magic, though it seems like the correct idea to me anyway.

Am I missing something?

P.S. symbol version pinning seems like a great idea, I've even dreamt of having exposed syntax for it, e.g. `Foo::<i32>::bar@v1.3.2(arg1, arg2)`.


Please see the discussion below; I already discussed this at length :)


Just so I'm clear, are you saying then that it's not possible or desirable to have Rust 201X code link against Rust 201Y STD? Or would that work?

Sorry if I'm not making my thought very clear.


It is not possible to have the standard library be different for different editions; it must be the same in all of them. There's no #[cfg(edition = "...")] construct.


Can't there be a #[deprecate(until_edition = 2018)] to solve this? The standard library contains all the old stuff, but it becomes inaccessible in all editions after 2018.


There can not.


Gotcha, thanks for taking the time to explain.

I hope I didn't come off as being too negative about the change. Warnings are still a pretty good solution here.


It’s all good!


Would it be reasonable to introduce a compiler flag that is enabled by default to mark this category of warnings as errors, or would even that need a major version bump?


It’s not really feasible. Maybe in an extreme case, but this just isn’t one. We have done this for fixing language soundness holes, but it is extremely rare. The spirit of the law matters as much, if not more, than the letter.



It’s a policy question, not a technical question. We can absolutely do it technically.


> We can’t get rid of it because we have a commitment to not breaking users’ code. There will not be a Rust 2.0.
This is interesting. The C standards people, for example, removed "gets" from the C11 standard. Go, too, has exceptions to its Go 1 Compatibility Promise[1], which include security, unspecified behaviour, and bugs, both in the language and in the standard library. I don't remember for sure, but I think there were things removed from C++ specifications as well; please correct me if I am wrong. Did Rust decide to never break stdlib compatibility because the mechanism of editions was considered from the start, or was this rather ad hoc?

[1]: https://tip.golang.org/doc/go1compat#expectations


We do reserve the right to make changes for soundness holes. In theory, you could argue that this is a soundness hole, and so we can remove it.

We have to do such things extremely judiciously though. Users can only tolerate so much breakage, even if you say “we did say we reserved the right to do this.” This API is just so rarely used that it was judged not a good time to pull the “technically we are allowed to do this” card.


C, C++, Common Lisp, and others have the benefits of an ANSI standard. Any compiler that says it supports the standard, supports it, regardless of compiler version or breaking changes in future standards. My C89 code will continue to work as long as a compiler supports that standard. Worse comes to worst, I can create my own implementation.

When you don't have a standard, you're left at the mercy of languages promising not to break stuff. If they do anyway, your options aren't as clean as just sticking to the older standard.


C++ has changed some very old or rarely used things, like `auto`. It's very rare though.


> There will not be a Rust 2.0.

Isn’t Rust 2018 effectively Rust 2.0 (since it introduced new keywords and requires changes to old code if a user opts in to Rust 2018)? So the function could be made unsafe in Rust 2021 (or whatever it’s called).


No, because it is opt in. All existing code keeps compiling as-is.

The standard library cannot change with editions for exactly this reason. It’s compiled with a particular edition, like any other crate, and so can’t work differently in different editions.


I still feel that there is more to language compatibility than "Can rustc run it with the right setting?". Rust 2018 is a new version; the fact that it's not called 2.0 is just semantics. And I still believe "version interoperability" would have been a better term than misusing "backwards compatibility".


All existing code continues to compile as-is. That’s the definition of compatibility.


Can I copy a module from an old project into a new crate and it will always work without adjustments?

What about code examples from StackOverflow, or Github issues? Will these work without adjustments in my local project?

Will a future syntax highlighter always be able to highlight the code in my old Rust articles?

Will future tooling always be able to read my current code?

Can I make editor macros and snippets that will keep working in every version of Rust?

If I can write code in one crate and it will compile, but then write it again in another, newer crate and it doesn't, then it isn't backwards compatible. Plus, as noted above, "code will keep compiling" isn't even a guarantee that the language team gives you.


1. Depends on the new project, of course. Just like any language.

2. Same thing. You can construct an example that breaks any language in existence.

3. Yes.

4. Yes.

5. Yes.

6. You’re using an idiosyncratic definition, so yes, it’s not the same. Backwards compatibility is about the same thing continuing to compile, not about changing things and expecting it to still compile; that’s forwards compatibility.


Once again something is at odds here. You give resounding "Yes" comments, but in the backwards compatibility discussions I had with language team members, those "Yes" comments were actually "not really our problem" answers.

I also don't know how current code running in a future compiler is forward-compatibility. And the Rust guarantee only kind-of holds if you define "code" as "crate". Everything outside of or crossing that boundary is not compatible.

To clarify and bring it to an example: Are you guaranteeing that `(a<b, c>(d))` will always be parsed as a tuple with two comparisons? Because if so I believe you're the only one.
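For readers who haven't seen this one: the parse in question really does compile as two comparisons today, and resolving it the other way is exactly what the turbofish `::<>` exists to disambiguate (a minimal sketch):

```rust
fn main() {
    let (a, b, c, d) = (1, 4, 3, 2);
    // Parsed as a 2-tuple of comparisons: (a < b, c > (d)).
    // If `a` were a generic function, you'd call it as a::<b, c>(d) --
    // the turbofish -- because a<b, c>(d) would be ambiguous.
    let t = (a<b, c>(d));
    assert_eq!(t, (true, true));
}
```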


When he says "version" he means "version". Are you taking "version" to mean "edition"? No matter how many version upgrades the rust compiler gets it will still compile 2015 edition (and 2018 edition) code.


An edition is just another word for version. If the language were truly backwards compatible, you wouldn't have to tell tools like `rustc` which version/edition your code is written in.


You’re conflating Cargo, Rust (the language), and std/core libraries, and rustc (the compiler). This is one of the difficult parts of Rust and when people say “there won’t be a Rust 2.0” they don’t clarify which part they’re talking about.

Given the tight relationship of all of these parts (despite each one being capable of being individually versioned), it would seem feasible to add a cfg attribute for the Rust edition, and then make the function unsafe if the user had opted in to Rust 2021 (or whatever).


I am not conflating anything. The compilation model of the language does not permit such a thing, independently of implementations of any of those pieces. This is a crucial aspect of the design of the language and the edition system.

Think of it this way: crate A uses edition 2015. Crate B uses edition 2018. There’s only one copy of the standard library. It can’t be compiled both ways.

If, in theory, we let you have multiple copies of the standard library, maybe that could work, but that’s not possible nor desirable for a host of reasons.


With all due respect, I do think you're conflating the way things are with the way things have to be.

glibc has a similar commitment to never breaking user code, and yet symbols have been removed from glibc! The trick is to use "symbol versioning"; code compiled against old versions of glibc is transparently rewritten to use the old versions of symbols, while code compiled against new versions is rewritten to use the new versions of the symbols. [0][1]

You can imagine something similar for the Rust standard library, whereby both the unsafe and safe versions are provided, and packages are transparently rewritten to use the correct version based on their specified edition. You don't need two copies of the stdlib; just of the symbols that have diverged. In this case, you wouldn't even need two versions of the symbol in the resulting binary, since the addition of `unsafe` doesn't change the codegen of the function, but merely restricts what code is accepted by the compiler.

It's true that this would require some serious shenanigans in the stdlib, but it's a tractable problem. It's also, in my mind, quite acceptable for the standard library to play games like this, since the compiler, the language, and the stdlib are always going to be tightly coupled, and the complexity can be shielded from users.

All that said, it sounds like the list of deprecated stdlib features is nowhere near long enough to justify adding this kind of complexity.

[0]: https://gcc.gnu.org/wiki/SymbolVersioning

[1]: Note that symbol versioning is kind of a quagmire, but that's because it's seriously underdocumented and practically no developers are aware that it exists. The idea itself is sound.


Maybe we’re just speaking at odds here. The current design of the language does not allow this. A different design may make it possible, but that’s a completely different question.

I think what you’re saying is "it is possible to do this with the correct design," and what I’m saying is "that design has not been proposed nor accepted, so the answer today is 'that's not possible'." Does that sound about right?


Yes, absolutely! (Although note that I'm not the OP.)


Cool. To be clear, I think that it’s maybe not even possible with symbol versioning, because you need more information than that. The compiler has to know “is this function unsafe or not”, for example, not just “does this symbol exist.” I’m not an expert at symbol versioning, so maybe there’s a way to do this I’m not aware of. There’s also other stuff you’d need, too, I believe... but the real point is, you cannot do it today, and that’s really all I’m trying to say. :)


Yep, definitely. In this specific case of a missing unsafe marker you don’t need symbol versioning at all; you need the stdlib to somehow present two signatures for the same function that the compiler can select between according to the Rust version. It’s only for more complicated cases of API breakage (your function took an i32 but you need to extend it to an i64) that symbol versioning becomes necessary.

And great! All I wanted to point out was that, should the need arise, it is technically possible to tie stdlib function signatures to a particular edition.


Currently, when you have crate A with a deprecated function, there is no warning until crate B actually uses that deprecated function. So clearly there is some place crate B can look to see whether a particular method is deprecated; there is even a note field that gets printed when crate B uses it.

Why would it not be possible to have some additional information that says "from rust edition 2021 onwards this is actually obsolete". Whether crate B merely gives a warning on compilation or an error then depends on crate B's edition.

Sure that means you can still use the method if you really want to (by using older editions), and it also means that if some of your dependencies are still on an older edition you can't prevent it from being used by them, but it would be a higher hurdle to accidental usage.
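A sketch of the deprecation machinery being described, using the existing `#[deprecated]` attribute (the edition-gated variant proposed above is hypothetical; today the attribute carries only `since` and `note`):

```rust
// The attribute is recorded in the defining crate's metadata, and the
// warning fires at the call site in the downstream crate.
#[deprecated(since = "1.34.0", note = "use `new_api` instead")]
fn old_api() -> u32 {
    1
}

fn new_api() -> u32 {
    2
}

fn main() {
    // Without the allow, this statement emits a deprecation warning
    // (but still compiles -- that's the current policy in a nutshell).
    #[allow(deprecated)]
    let old = old_api();
    assert_eq!(old + new_api(), 3);
}
```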


> Why would it not be possible

There may be a way in which it is possible, but it is not currently possible. We’d have to have a different design.


> We can’t get rid of it because we have a commitment to not breaking users’ code.

This is not really true though. The commitment is about publicly visible and crater-testable code. If you have your own code in-house, there isn't much guarantee unless all syntax/code you use is widely used in public as well. A known change in language grammar will be evaluated on its publicly visible impact, not on its general breakiness.

It's really disappointing to keep reading about guaranteed backwards-compatibility and then being told in RFC discussions that those don't exist.


Cf. `try` becoming a keyword, when there are hundreds of calls to a method named `try` in Servo, one of the largest Rust projects in existence.


However, none of that affects a crate until it opts into the new syntax. So everything keeps building just fine.


That’s not true. We have rules. Some of those rules allow us to make changes based on that kind of thing, but it’s still done relatively little.


I've had many discussions and much pain about this topic. Can you tell me what I wrote that was specifically wrong?

If backwards compatibility is guaranteed, turbofish cannot be "fixed". I can't find the link currently, but from my reddit post on this discussion the crucial quote from a language team member was:

> Our bar for doing backwards compatibility breaks has never been soundness fixes. We have in the past done changes given future-compatibility warning with lints and then made such changes without an edition.

The turbofish discussion also contains this quote:

> This RFC technically amounts to a backwards incompatible change without using the edition mechanism. However, as the RFC notes, a crater run was made and the syntax was not encountered at all.


Yes, there are some lang team members that want to change this policy, but it is not the current policy. Those changes were for soundness fixes.

You can see how controversial that RFC was, and for that exact reason.


No, that wasn't for soundness fixes. This is from the discussion surrounding syntax changes like turbofish or chained if-let bindings.

Can you point me towards an authoritative post in the turbofish discussion that says that it can't happen because of backwards-compatibility changes? Because before it was locked it seemed that the language team wanted to go ahead.

I've been repeatedly told by language team members that the policy you're promising people doesn't exist. I would really appreciate it if there was further clarification.


Those things didn’t happen yet. I’m talking about the ones that did happen.

My understanding is that, again, some lang team members want a different policy. That doesn’t mean that it’s actually different.


Where do you get that understanding? All the comments and the summaries I've seen by language team members on this give me a different impression. See this comment from Niko for example: https://github.com/rust-lang/rust/issues/53668#issuecomment-...

So, can you point me to a comment that tells me that the policy still holds and is as you are presenting it?


The language evolution RFC.


How do you square your interpretation with the comment I linked to above?


Not sure if there was anything post-1.0, but this[0] particular mess was just before the 1.0 release and, at the time, caused a fair bit of chaos.

0. http://cglab.ca/~abeinges/blah/everyone-poops/


I remember that one. It came in just under the deadline and it would have been pretty bad if that got into 1.0.

The resolution of this not only had a notable effect on how Rust thought about memory safety (in particular, that leaking memory could not be considered unsafe, as it's always possible to construct a leak using safe Rust if you have access to Rc; as a result, `std::mem::forget()` was changed from unsafe to safe), but also informed the way we used RAII going forward. The fundamental unsafety was that an RAII value was used as a token representing an external computation, rather than being the thing that performed it, which meant leaking the token didn't prevent the computation from happening.

In all other known cases of RAII, leaking the value is safe because without the value you can't access the protected resources. In this case, though, the token represented a thread, and the thread would of course continue to run after you leaked the token.

The takeaway: you can't use tokens to represent external processes, and you probably shouldn't use them to represent, e.g., hardware that needs to be put back into a known state when the token is dropped, unless you also have a mutex in there (which would cause a deadlock if you drop the token and then reuse the resource, rather than sending bad commands that break the hardware). Instead, any time leaking the token would cause unsafety, you need to structure your code to use explicit scopes, such that the library can clean up at the end of the scope.
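The "leaks are already possible in safe code" argument can be shown in a few lines (a sketch of the Rc-cycle construction, not the original `thread::scoped` code, which no longer exists):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two nodes that point at each other through Rc form a reference cycle.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b));

    // Each node keeps the other alive: the strong counts never reach
    // zero, so the destructors never run -- a leak, in 100% safe code.
    assert_eq!(Rc::strong_count(&a), 2);

    // Which is why mem::forget could be made safe as well.
    std::mem::forget(String::from("explicitly leaked, also safe"));
}
```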



There have been other instances. One is as_mut_slice, which resulted in the 1.15.1 patch release. https://blog.rust-lang.org/2017/02/09/Rust-1.15.1.html Another is a borrow checker bug, which resulted in the 1.26.2 patch release. https://blog.rust-lang.org/2018/06/05/Rust-1.26.2.html


I wonder if they could after some time make it a "harder" deprecation; fail by default, and enabled only with an `#[allow]` annotation.


That would still break existing code: you have to change it for it to compile. And if you are going to change the code, it makes more sense to change the code to use pre_exec in an unsafe block than to use before_exec with an #[allow] annotation.


Now that alternate registries to crates.io are available, I'm hoping Artifactory adds support for Rust: https://www.jfrog.com/jira/browse/RTFACT-13469


Note that Rust has supported mirroring ("source replacement") for a while now. https://doc.rust-lang.org/nightly/cargo/reference/source-rep...

This feature is for alternate registries that _supplement_ crates.io.



Looks like that's a site for notifying you of new dependencies?

I think I'd prefer PRs like https://dependabot.com/


No mention of RISC-V support? It sounds like RV64GC finally landed (hurrah, and thanks!):

"riscv64imac-unknown-none-elf and riscv64gc-unknown-none-elf targets are now on stable rustc 1.34.0 :slightly_smiling_face:" https://github.com/rust-lang/rust/blob/master/RELEASES.md#ve...

https://github.com/rust-embedded/wg/issues/218#issuecomment-...

EDIT: Mea culpa, I somehow missed that it _is_ in the release notes, but still, it's worth pointing out. I hope it'll find its way into Fedora/RISC-V.


Such a boring release. That's why I love Rust.


For me the best releases will be when it matches D/Delphi/Ada/Eiffel/.NET Native compile times and cargo supports binary dependencies.

The language as such is already quite good, in spite of one or other possible improvements on borrow checker ergonomics (e.g. callbacks).


Binary deps are on the Cargo team’s plans for the year, and compile times are always a focus. Working on it!


Looking forward to them. :)


> For me the best releases will be when it matches D/Delphi/Ada/Eiffel/.NET Native compile times [..]

Any ideas on when this will be? There has been talk about faster compile times for years now without that much apparent progress.


> There has been talk about faster compile times for years now without that much apparent progress.

Really? What makes you say that? Have you tried it?

    $ git clone git://github.com/BurntSushi/ripgrep
    $ cd ripgrep
    $ git checkout 0.4.0
    $ time cargo +1.12.0 build --release
    
    real    1:09.13
    user    2:06.08
    sys     2.839
    maxmem  359 MB
    faults  1292
    $ cargo clean
    $ time cargo +1.34.0 build --release
    
    real    22.484
    user    2:32.66
    sys     3.380
    maxmem  702 MB
    faults  0
That's a >3x wall-clock speedup over the past 2.5 years on a cold start. That's pretty good.


A 3x speedup is a definite improvement. I don't use Rust regularly, so I've been relying on community news for updates, and while they have mentioned planned work to increase compilation speed a few times, they haven't really talked about any of it landing (that I saw, anyway).

Also, cold-start speed is not really what I care about. I am more concerned with compiler speed while working and running tests. I find that if a language's compiler is too slow, it breaks flow while testing changes. I haven't seen much mention of the improvements incremental compilation gives in a while. The last I read was the 2017 blog post [1] during the beta, and it only showed modest improvements; more recently I only see talk of how it still needs a lot of work [2].

[1] https://internals.rust-lang.org/t/incremental-compilation-be...

[2] https://nicoburns.com/blog/rust-2019/#compile-times-especial...


Incremental compilation was a big improvement over the status quo.

I don't follow compiler performance developments. I'm just responding to clarify that there have been performance improvements. They have likely just built up over time. I don't think there was any one specific change that dramatically improved things.


The memory doubled?


I don't know about that specific measurement, but memory usage of the Rust compiler has not doubled in general. If it did, we'd notice immediately, as the script crate in Servo would probably OOM.


I wouldn't be surprised if it was due to parallelism, e.g., compiling multiple codegen units in parallel, or even better parallelism at the Cargo level. My stat is just the maximum memory usage reported at any point in time. But that's just a guess. ripgrep itself uses more memory when using parallelism, just because of having more buffers.


Not sure if it’s the case here, but decreasing time complexity can often require a corresponding increase in space complexity.


We can only speculate about the lower bounds of compile times, but making things faster takes a lot of work. Rustc is a huge codebase, and making general leaps requires large architectural changes. We’ll just keep chipping away. It’ll take some time.

There has been a lot of progress, it’s just been slow and steady. For example, since the first of 2018: https://perf.rust-lang.org/compare.html?start=2018-01-01&end...


This will probably be a big win if/when it lands [0] (adding the ability to replace LLVM with cranelift)

[0]: https://github.com/bjorn3/rustc_codegen_cranelift/issues/381


This is only for debug builds, though; Cranelift isn't going to offer good enough performance for a release-mode binary anytime soon (and probably never will).


If Cranelift makes debug builds much faster, then that's still a huge win. In my line of work, I do tend to compile things in release mode a lot purely because I often need to debug performance related problems, and for that, debug mode doesn't work. However, most of the test suites in my crates are built and run in debug mode. For example, the test suite for regex-syntax is fairly large, and it can take several seconds to build after making a change. Incremental compilation helped a lot with this, but there's still an annoying waiting period to run the tests. I'd be very happy to see Cranelift reducing the time it takes to run tests.


I 100% agree with you that Cranelift is a big deal (last year I had to change my desktop CPU just to be able to build, in a reasonable amount of time in debug mode, the Rust project I'm working on; OK, my CPU was still a Core 2 Duo at that point).

Even for things where debug mode is too slow, Cranelift could be a game changer, since it promises to produce more performant binaries than LLVM in debug mode (Idk if it will be fast enough for your use cases, though).

I just wanted to point out that Cranelift won't solve the compile-time issue all by itself.


We will soon have async/await syntax, which will replace some of the use cases of callbacks and (I assume) reduce the borrow-bookkeeping tedium.


I don't think it covers GUI callbacks like on Gtk-rs, accessing internal widget data.


I think that's likely. async/await doesn't appear to map directly onto traditional OO GUI framework callbacks well, just like rust doesn't tend to map well to traditional OO. I believe that async/await will open up new architectural patterns that should be as ergonomic as traditional OO callbacks for GUIs. But these won't map well to existing code or existing frameworks.


There have been attempts, like relm, to map futures onto GTK. That means it would work with async/await too, as they’re fundamentally sugar for futures.


Even those are basically Relm components that happen to render via GTK. You can't really import a plain `GObject` and implement a traditional `fn on_click()` without having to worry about interior mutability and the associated borrow checking complexities.


Why binary dependencies, what for?


Compile speed, not compiling the same dependencies over and over again.

Many commercial use cases require distribution of binary libraries, Rust community might care about winning those customers, or just let them go and leave them to keep using the languages that fulfil such use cases.


> Compile speed, not compiling the same dependencies all over the time.

Sounds like caching these would be a simpler solution.

> Many commercial use cases require distribution of binary libraries [...]

Good use case, and in this case another cache would solve the issue as well. Defining a protocol for binary caches and being able to add your own could solve this very well. The same solution could help solve the previous one too.

BTW, if you are in either case, have you looked into Nixpkgs[1]? They might be able to do both, the basic capabilities are there, not sure if the Rust infrastructure[2] already provides it.

[1] https://nixos.org/nixpkgs [2] https://nixos.org/nixpkgs/manual/#users-guide-to-the-rust-in...


And if you have to drop down to looking at the disassembly, distributing binaries helps ensure everyone's looking at the same disassembly. Reasons for this include investigating codegen bugs, figuring out optimization issues, ensuring functions involved in cryptography aren't leaking information through timing side channels, etc...


Personally I am very excited about TryFrom, I have been waiting for that forever it seems.


Agreed! I definitely have a handful of hand-rolled `try_from` definitions that I'm excited to erase from existence.


For the most part, that's true. But I am so excited that custom Cargo registries have finally landed!
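For the curious, the setup is roughly this (registry name and index URL are hypothetical) − you name the registry in `.cargo/config` by pointing at its index repository, then reference it per-dependency:

```toml
# .cargo/config -- name a registry by its index repository
[registries]
my-company = { index = "https://git.example.com/crate-index.git" }

# Cargo.toml -- then pull a dependency from it
[dependencies]
internal-thing = { version = "1.0", registry = "my-company" }
```

Publishing works the same way, via `cargo publish --registry my-company`.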


> People maintaining proprietary/closed-source code cannot use crates.io, and instead are forced to use git or path dependencies. This is usually fine for small projects, but if you have a lot of closed-source crates within a large organization, you lose the benefit of the versioning support that crates.io has.

I'm surprised that making people run their own registries is the preferred way to address this, instead of implementing versioning support for git dependencies.

There are many bad things I could say about golang, but "I wish this module system additionally required me to host a language-specific package repository" is not one of them.


I dunno. The worst part of Go in my opinion is that disaster that is packaging. I like rust because there aren't eleventy ways to do things like that. I can certainly see your perspective. I'm just of the other opinion.


What do you mean by "versioning support for git dependencies"? We do support pulling in git dependencies based on a branch (I think tags work?)

We also support checking the version of the git dep.

It's fetching the dep that's the problem. Git has no uniform protocol for talking about versions, especially when it comes to resolving SemVer ("you asked for 0.2.1 but I also have 0.2.5, here you go")

If you want go-style import resolution use a local git dep. That works fine. It's specifically when you want all the features of cargo's version resolution -- which git doesn't understand -- you need to use a custom registry.


"Versioning support" is the wording from the release notes. I don't know why cargo's version resolution logic can't be applied to git dependencies, treating a tag like "v4.2" or "4.2" as a version 4.2 doesn't seem very contentious. I also don't know what you mean by local git dep.


> I also don't know what you mean by local git dep.

Presumably, the `git` URL for dependencies, in `Cargo.toml`, can point to a local git repository. That should let you refer to a branch / commit of that repository, other than the one currently checked out (which is what using a `path` dependency would give you).


"local git dep" was a mistake, I meant "git dep", which can be used in a way that approximates Go's import system if you use versioned tags.

(The "local" comes from it being in-house but that's confusing)
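For what it's worth, that tag-based approximation looks like this in `Cargo.toml` (crate name and URL are hypothetical). Cargo checks out exactly the named tag; it performs no semver resolution across tags:

```toml
[dependencies]
# Pinned to one specific tag -- no "give me any compatible 4.x" resolution.
internal-crate = { git = "https://git.example.com/team/internal-crate", tag = "v4.2.0" }
```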


Ah, that's cool to hear. Which features of cargo versioning am I missing out on then?


Actual semver based version resolution.

And yes, this _can_ be added with some ad hoc scheme that requires you to use a certain kind of git tag, but that's not what people are asking for.


There’s a ton of people who want a variety of solutions for a ton of things; many people require on-prem for this kind of thing. We’re building out a variety of solutions, this one happened to land first.


Sure, but they probably have on-prem git anyway; now they need two on-prem things?


I don't think it's unheard of though: my employer has their own Nuget package server for internal packages.


It can be useful to run your own package server even if you're not creating your own packages. You can use this to keep a curated list of packages that your team finds commonly useful / has audited / has had the license blessed by legal / ???, or to be productive during an internet outage, slowdown, or lockdown, etc.


If there is no Internet, I'm going home. You can't seriously expect me to work without StackOverflow


It's also nice to not have your CI/deploys depend on third-party sites being up.


There's plenty I can work on without SO - that large backlog of things to document, for example. And just because your dev machines are on a locked down network doesn't mean you have no internet.

If perforce is down, however, I'm starting a riot ;)


It will be nice when Cargo dependencies can use pre-built binaries instead of having to compile the whole dep chain. One crate I contribute to has a 4 GiB target folder and takes 20 mins to compile from scratch...


In the meantime, you can use sccache to at least save the time spent to compile the same version of a crate more than once.

https://github.com/mozilla/sccache
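If you haven't set it up before, the minimal setup is roughly:

```shell
# Install once, then route rustc invocations through the cache.
cargo install sccache
export RUSTC_WRAPPER=sccache

cargo build            # first build populates the cache
cargo clean
cargo build            # rebuild now pulls compiled crates from the cache
sccache --show-stats   # inspect hit rates
```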


thanks for the recommendation. if I understand [1] correctly, this cache is hosed when you upgrade rust?

[1]: https://github.com/rust-lang/cargo/issues/1139#issuecomment-...


Yes, everything needs to be rebuilt with the new compiler after an upgrade, but sccache still provides very valuable time savings when you work on multiple Rust projects.


See below, that is planned. Can’t guarantee a timeframe though.


reminds me of nixos early days, then they added binary channels, so much nicer


I liked the concept of alternate registries in this release


I imagine this will unshackle some people who have been wanting to use Rust within organizations.


That is one of the main motivating factors!


It feels like Rust is now 'stable'. Are there any major language related things in the pipeline?


The biggest one in the near future is async/await; you’ll see the precursors land in the next few releases. There will also be a blog post soon outlining overall plans for the year, but const generics, GATs, and specialization are the likely big-ticket items.

In general, this year of Rust will be about governance refactoring, finishing off long-desired features like the ones above, and general polish. There aren’t a lot of plans for big new things coming anytime soon; it’s more about finishing what we’ve started and improving what we have.


> GATs

Oh, fantastic! I realize I'm probably in a very specific minority here, but I've been waiting on this one for literally years.


For people confused like me: GAT in this case means "Generic Associated Types"

https://rust-lang.github.io/rfcs/1598-generic_associated_typ...


This RFC has an explanation of Higher-Kinded Types that enabled me to understand what is meant by HKTs for the first time. Unexpected & awesome! Thanks for linking!


Huh... Safari on iOS crashes on that page


Interesting, it doesn’t crash for me!


I have run into a couple of places where I needed GATs while writing abstractions, without realizing beforehand it'd lead to needing GATs. This is especially true with lifetimes, which I'm not used to thinking about as part of an abstraction. I think it will naturally open up doors for everyone without anyone necessarily realizing it a priori.


To help people follow along, GAT stands for Generic Associated Types.


I think another big item is const fn and compile time function evaluation.


That’s under “const generics”, but yes. Technically const fn is already stable, it’s just expanding what it can do, generally, so we tend to talk about them as one thing.


I think it's not a good idea to talk about them as one thing and the use cases also differ. `const fn`s are deterministic ("pure") functions that can be evaluated at compile time if all arguments provided also can. `const A: B` generics are about compile-time value dependent typing. The former is important for the expressiveness of the latter but they are ultimately independent. Moreover, the implementation effort is also mostly independent (different people are doing the effort). Even having them in the same WG might not be a good idea.


In my mind const generics and const fn is a completely different feature. Proof: you can use const fn without any const generics.


Sure. They rely on the same internals, which is why they tend to be wrapped together when talking about them as a feature, that’s all I’m saying.

That also doesn’t change that const fn is stable today, so saying “it’s coming this year” muddies the waters a bit. You have to explicitly say “the capabilities of const fn will be expanded”, or you risk the wrong impression.


You mean expanding and improving const fn? Because the basics of const fn have been stable since 1.31 and were improved in 1.33.


Steve Klabnik's comment has already mentioned async-await, "const" generics (much like template parameters in C++, expected to be especially useful for numerics-like code), generic associated types/GAT (related to higher-kinded types as found in e.g. Haskell) and specialization (allowing narrower, more specific trait implementations to override broader ones; a not-so-ad-hoc, more elegant approach to the overall, broad issue of implementation inheritance).

On a different level, a lot of work has been planned to address compiler performance, improve IDE integration, and provide better support of special workflows e.g. for embedded development, or for WASM and the like. Work is also still ongoing on writing high-quality reference documentation for the language, and moreover for a better understanding of how exactly unsafe code should be expected to work, which in turn will enable a more formal approach to the Rust language as a whole.


Async/await is probably the next "big" feature.


We are really waiting for it in Prisma and might need to start working with the unstable soon until it lands.


Is Prisma releasing a client library for Rust?


We'll build the server first with Rust. Hopefully me or somebody else has time to do the library later. Would be fun, but right now all focus is on the server rewrite.


^ this.


There have been steady improvements for the FFI and macro system and I hope that continues. Both of those are important for C integration.


My main interest is Polonius borrow checker and two-phase borrowing.


Rust is legit next gen, check it out today.



