When I saw the work being done to add WebAssembly support to Rust I thought it was cool, but I never really put much thought into how far it could be taken. Seeing projects like this is really great and makes me realize just how much room there is for using Rust for web front ends. Given all the consideration for performance that the web has needed recently, I am excited to see what is possible with wasm and Rust in the future.
Also, it's amazing how fast this has popped up. Wasm was added to the compiler literally a month ago and already there is a standard library for DOM manipulation and front-end frameworks. 2018 is definitely going to be interesting.
Why does update(...) mutate the model instead of returning the next state? That would make update a pure function instead of a side effect mutating the model.
Returning a nextState in Redux/JS land is a workaround. Basically, one wants to avoid the situation where the state is mutated but the framework does not know. This happens because there are multiple references to the state, and because, for performance reasons, the framework is comparing states based on referential (===) equality. Thus the view (in whole or in part) and the model can go out of sync. Always returning a new state object is an expensive way of preventing that, but on the human time scales we're talking about, the cost is insignificant.
On the other hand, in Rust, we have the compiler on our side. If one grabs a mutable reference to the state (`&mut Model` in the update function in https://github.com/DenisKolodin/yew/blob/master/examples/cou...), then the Rust compiler ensures that there are no other references to that Model. Thus, when the framework does `nextState = mutate(&currentState)`, one is guaranteed that there are no existing references to `currentState`, and thus it is impossible for a shadow update to occur.
IMO, this is better not primarily for performance reasons, but because it is conceptually easier to mutate a state object than to use reducers. (I haven't looked into this framework much, so I don't presume to speak about its internal implementation; this may be off in some framework-specific details.)
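To make that concrete, here is a minimal sketch of an Elm-style update loop with in-place mutation (illustrative only, not yew's actual API; the `Model` and `Msg` names are mine). While `update` holds the `&mut Model`, the borrow checker guarantees no other live reference to the model exists:

```rust
struct Model {
    counter: i64,
}

enum Msg {
    Increment,
    Decrement,
}

// Takes exclusive access to the model. The compiler rejects any other
// live reference to `model` for the duration of the call, so no
// "shadow update" can happen behind the framework's back.
fn update(model: &mut Model, msg: Msg) {
    match msg {
        Msg::Increment => model.counter += 1,
        Msg::Decrement => model.counter -= 1,
    }
}

fn main() {
    let mut model = Model { counter: 0 };
    update(&mut model, Msg::Increment);
    assert_eq!(model.counter, 1);
}
```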
in a nutshell this is the counterargument to functional purity: side effects can be done correctly/safely when you have/build the right tools. i'm inclined to agree with it.
It's not really a counterargument so much as another approach. Immutability and purity are still powerful concepts, but you don't need to use them for every single situation in Rust.
In Haskell, mutable APIs are available but this is surfaced in the types. The idea is not so much that you shouldn’t have mutability, as that it shouldn’t look the same (and be treated by the compiler the same) as immutability.
Maybe Haskell goes too far in this regard, but Rust is in line with the same way of thinking, because mutability is surfaced and checked at type level.
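In Rust that looks like this (a tiny illustrative snippet): whether a function may mutate its argument is part of its signature, and the compiler holds it to that:

```rust
fn read_only(v: &Vec<i32>) -> usize {
    // Shared reference: `v.push(0)` here would be a compile error.
    v.len()
}

fn mutating(v: &mut Vec<i32>) {
    // Exclusive reference: mutation is allowed, and the `&mut` in the
    // signature makes that visible to every caller.
    v.push(42);
}

fn main() {
    let mut v = vec![1, 2, 3];
    let before = read_only(&v);
    mutating(&mut v);
    assert_eq!(v.len(), before + 1);
}
```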
FWIW, it's more of a "simplicity" choice - it is entirely possible to keep mutations reactive in React with libraries like MobX - it's just extra functionality that React doesn't ship out of the box but it works just fine without a borrow checker.
Incremental updates to large, immutable data structures do not require allocating a whole new state per update (despite maintaining immutability!). Mutation is still frequently more efficient, though, at the cost of not being able to reason immutably about your program. But if you're doing something that requires multiple versions of your state (e.g. history, undo/backtracking), the immutable version is going to be more efficient, and definitely easier to write efficiently in the first place.
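A sketch of that structural sharing in Rust (illustrative only; real persistent collections, e.g. in the `im` crate, are more sophisticated): each "new version" of this list shares its entire tail with the previous one, so an update is one small allocation rather than a full copy, and old versions remain usable for undo/history:

```rust
use std::rc::Rc;

// A persistent singly linked list: pushing a new head allocates one
// node and shares the entire tail with every previous version.
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

fn push(list: &Rc<List>, value: i32) -> Rc<List> {
    // One small allocation; `Rc::clone` is just a refcount bump.
    Rc::new(List::Cons(value, Rc::clone(list)))
}

fn len(list: &List) -> usize {
    match list {
        List::Nil => 0,
        List::Cons(_, tail) => 1 + len(tail),
    }
}

fn main() {
    let v0 = Rc::new(List::Nil);
    let v1 = push(&v0, 1);
    let v2 = push(&v1, 2);
    // All three versions coexist cheaply: v2 shares v1's cells,
    // which is what makes undo/backtracking inexpensive.
    assert_eq!(len(&v0), 0);
    assert_eq!(len(&v1), 1);
    assert_eq!(len(&v2), 2);
}
```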
I'll give you that it'll be easier to write, but once you start working with value types (which Rust has proper support for), you don't get the nice efficiencies that come when everything in the world is a reference.
Most efficiency gains of imperative programming languages like C or C++ come from the compact and contiguous memory layout and the usage of the stack, which is usually in the L1 cache.
In theory there would be no difference between mutating an existing memory area and allocating new memory to write to. The number of bytes written to RAM is the same.
> In theory there would be no difference between mutating an existing memory area and allocating new memory to write to. The number of bytes written to RAM is the same.
It's not the same: you need to allocate new memory, and if it's on the heap then (depending on your allocator/OS etc.) you may be making a syscall, which means a context switch, cache eviction, etc.
This is where understanding the difference between theory and practice is important: by its nature, freshly allocated heap memory is not going to be in the L1 cache.
That means a difference of ~300 cycles before you can even start working with the cache line you pull from main memory.
it's not any more expensive than allocating a new object in Javascript. possibly less. if you want immutable models, allocation will most likely happen on mutation.
Idk about JS's memory model, but you can allocate the equivalent of a JS object in Java and Haskell very, very cheaply. I really don't think allocating a single JS object is expensive...updates to large immutable data structures should just require a few allocations (aka a handful of pointer bumps). Sure, it's technically more expensive than an in-place update to an equivalent large mutable data structure. But it's also not a fair comparison given one gives you way stronger guarantees about its behavior.
Except in those languages it can be just as brutally painful to allocate. Start modifying strings in the render loop on Android and see how quickly you get destroyed by constant GC pauses.
The only way to address it is with extensive pooling and heuristics.
Or you can just mutate.
Really, it's no wonder that the web is so slow if the common conception is that allocating is no big deal. If you really want to do performance right, allocations should be at the top of your list, right next to how cache-friendly your data structures are.
Because in reality it is no big deal. Modern GCs are incredibly efficient.
When a GC allocates memory, all it does is check whether there is enough room in the "young generation"; if there is, it increments a pointer and returns the previous position in the young generation. If there is not enough room, the GC starts traversing the GC root nodes on the stack, and only the "live" memory has to be traversed and copied over to the old generation. In other words, in the majority of cases allocating memory costs almost as much as mutating memory.
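The fast path being described is just a bounds check plus a pointer increment; here is a toy sketch of the idea in Rust (a model of a nursery allocator, not how any particular GC is implemented):

```rust
/// A toy bump allocator modeling a GC nursery's fast path:
/// allocation is a bounds check plus a pointer increment.
struct Nursery {
    buffer: Vec<u8>,
    next: usize,
}

impl Nursery {
    fn new(size: usize) -> Self {
        Nursery { buffer: vec![0; size], next: 0 }
    }

    /// Returns the offset of the new object, or None when the nursery
    /// is full (where a real GC would run a minor collection).
    fn alloc(&mut self, bytes: usize) -> Option<usize> {
        if self.next + bytes > self.buffer.len() {
            return None; // trigger a minor GC in a real runtime
        }
        let offset = self.next;
        self.next += bytes; // the entire cost of the common case
        Some(offset)
    }
}

fn main() {
    let mut nursery = Nursery::new(1024);
    assert_eq!(nursery.alloc(16), Some(0));
    assert_eq!(nursery.alloc(16), Some(16));
}
```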
There is a huge difference between "algorithmically efficient" (aka big O) and "real life efficient" (aka actual cycle counts). In real life, constant factors are a huge deal. Real developers don't just work with CS, they work with the actual underlying hardware. Big O has no concept of caches, SMP, hyperthreading, pipeline flushes, branch prediction, or anything else that actually matters to creating performant libraries and applications in real life.
There is a huge difference, and you're also discounting the GC itself. Haskell's GC, for example, is tuned to the characteristics of the language, meaning it's pretty efficient at allocating and cleaning up lots of memory; it has to be, since everything is immutable.
The GCs of mutable languages like JS or Java aren't necessarily built for this, compared to GHC's. And even discounting all that, things like stack memory, caches, etc. all make a huge difference in real-world performance. GCs have come a long way, but there is still a gap in performance.
React itself does not allocate a new object, but it forces you to do it yourself: to update the application state, you're supposed to call `setState()` with a brand new state (which is a newly allocated object). In the React tutorial[1], you can notice the use of the `Object.assign` pattern, which performs a new allocation.
However, most JS runtimes have a generational GC, so an allocation isn't remotely as costly as an allocation in C or Rust.
> React itself does not allocate a new object, but it forces you to do it yourself: to update the application state, you're supposed to call `setState()` with a brand new state (which is a newly allocated object)
Afaik you don't have to. It just makes things easier, since state changes can be detected through shallow reference comparisons. There is even the shouldComponentUpdate() hook to allow the user to detect state changes which did not produce a new state object.
You save a single allocation for every state change in your app? Never mind the actual render cycle? And lose a simple reference check to decide if you can short-circuit a part of your app?
Mutating in Rust is not like mutating in JS, Python, Java, C#, or many other languages. Rust tracks ownership: you always know who might mutate the object and when. It is a very unique experience programming in Rust.
This might be more sane in Rust, I am not familiar with Rust.
I have used a lot of Redux in JS and written a few things in Elm, and the immutable property of application state is very central in both, and an important part of being able to reason about the UI, as well as about events happening concurrently.
Not really. Of course you can copy immutable data upon update, but persistent data structures (which share large amounts of state between data versions)[1] are hard to write without a GC...
Reference counting [1] and atomic reference counting [2] are supported within Rust, and are sufficient for any persistent data structure without cycles. There's also a prototype crate for a mark-and-sweep GC [3], which would likely work for anything else. Writing such data structures using these features might be a bit more tedious than writing them in a GC'd language, but I suspect the increased explicitness is a worthwhile tradeoff. See the crossbeam library [4] for some examples of lockless concurrent data structures implemented in Rust.
As an alternative GC in Rust, you also have Josephine[1], which uses the SpiderMonkey[2] GC. It's cumbersome to use though[3] and should probably not be used outside of Servo.
If you don't have cycles, ref counting can work (it's a kind of GC, after all), but it does come with a big performance overhead compared to a modern tracing GC.
I'd say it's not the immutability itself that's so valuable, but rather restricting where mutation can be done. If the app state can only be mutated in one place in your app and under strict conditions, formal immutability no longer offers that much.
"You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."
The problem is that the banana has a jungle attribute. If you pass a banana to a function, you're also implicitly passing in the entire jungle. The function could mutate the entire jungle without your knowledge. If you remove the jungle attribute from the banana class, you have to explicitly pass the jungle as a function parameter. This makes it far easier to understand the code.
The same applies to return values. If you return every changed value from a function and then explicitly handle the change at the call site, it doesn't actually matter whether the data is mutable or immutable, because the side effects are visible and easy to understand.
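A small Rust sketch of the difference (the `Banana`/`Jungle` types are of course made up for illustration):

```rust
struct Jungle {
    tree_count: u32,
}

struct Banana {
    ripeness: u8,
}

// Implicit-jungle style would look like this: a banana dragging its
// whole environment along, so any function receiving it could mutate
// the jungle unseen.
// struct BananaWithJungle { ripeness: u8, jungle: Jungle }

// Explicit style: each signature tells you exactly what can be read
// or mutated, and the caller sees every effect.
fn ripen(banana: &mut Banana) {
    banana.ripeness += 1;
}

fn plant_tree(jungle: &mut Jungle) {
    jungle.tree_count += 1;
}

fn main() {
    let mut jungle = Jungle { tree_count: 100 };
    let mut banana = Banana { ripeness: 3 };
    ripen(&mut banana);      // cannot touch the jungle
    plant_tree(&mut jungle); // the jungle change is visible here
    assert_eq!(banana.ripeness, 4);
    assert_eq!(jungle.tree_count, 101);
}
```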
I was imagining a React-style UI framework, but for desktop apps. The ingredients I have in mind are React, GraphQL, and Rust. I think these three solve the problems of their respective areas the best, and combining them would be priceless.
Before I started programming in Rust, I had similar concerns when I saw people pushing Rust as a backend language choice. However, having written a not-insignificant amount of Rust, it really does feel a lot like Ruby/Python while still achieving high performance and low-level access.
This comes up in /r/rust periodically, and IIRC the answer is still pretty much "no". It is getting better, though, slowly. I'm on my phone and too lazy to find a link for you, but search /r/rust for discussions about GUI libraries/frameworks.
A typed query language will definitely help for anything you store either locally or remotely. In this case, it would provide information to the compiler to do most of the type checking for you.
It is not only GraphQL itself I had in mind, though. It is also about implementing a client library similar to Apollo GraphQL, where you can wrap your UI components in watch queries. Watch queries provide great developer ergonomics for handling the flux data flow: whenever there is a change to your data in the store, if the corresponding elements in the datastore are watched by a query, the UI components wrapped by that watch query are automatically re-rendered with the new data.
I think this combination will help desktop development by reducing boilerplate code and by providing a convention for the development team. Our team has already enjoyed this combination in web UI development for the last year.
Native applications don’t need that, though. Modern languages embrace async/await and don’t waste cycles or block the UI or even the backend unnecessarily waiting for network/local IO or user interaction.
In that regard, Rust is actually behind: async/await is still coming.
I'd like to think it is more like when Oxygen entered the scene. Maybe we can never get rid of Javascript, but like anaerobic bacteria, maybe it can stay put and out of sight under some rock.
color me shocked: this is really a rust web framework (compiles to wasm). i was expecting a backend framework that supported react development on the frontend in some non-trivial way. wow. very cool.
but i thought wasm couldn't manipulate the dom? is that not the case anymore?
WASM is just something that can run compiled code in the browser. You can communicate with the Javascript environment; you cannot access JS libraries directly, but you can through thin bridges.
Using such bridges, you can create anything you could with Javascript, in any language that compiles to WebAssembly.
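One such bridge in the Rust ecosystem is the stdweb crate's `js!` macro (which, as far as I know, yew builds on); a minimal sketch of calling into the JS environment from Rust:

```rust
// A minimal sketch of bridging from Rust to the JS world with the
// stdweb crate (https://github.com/koute/stdweb); `js!` embeds a
// snippet of JavaScript and can splice in Rust values.
#[macro_use]
extern crate stdweb;

fn main() {
    stdweb::initialize();

    let name = "wasm";
    js! {
        // Runs in the JavaScript environment, with @{name}
        // marshalled across the boundary from Rust.
        console.log("hello from " + @{name});
    }

    stdweb::event_loop();
}
```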
Plus first class DOM support for wasm is coming at some point in the future, so I'd imagine any properly supported library could implement it once it is finalized.
I'm not sure I want this. I use WASM as a compilation target, but I really enjoy the simplicity of the platform and how all connections from and to the Javascript world have to be explicit. Allowing more transparent access to the DOM would diminish this highly defined interface.
Wasm is specified in two layers: wasm itself, and then the API the host provides. All of the "external surface" stuff is in the host, not in wasm itself, so wasm for another host doesn't get larger.
The story with Rust compiling to it is still young; it works well at the moment but is very fiddly. The early adopters and enthusiasts are working on making it all smoother, so for most people it's still a bit early. It just depends on your temperament.
Actively doing something that predictably results in 7.6% less traffic to your profit generating website seems quite the questionable business decision.
Assuming IE11 users are otherwise representative of the customer base, multiply your total web-based revenue by 7.6%. (If this assumption is wrong, then just check your reports to get the correct number.) If, for example, the business in question makes $10M yearly revenue, that's $760K per year at stake, enough by this estimate to pay 7 entire devs to work exclusively on IE11 support full-time, year round.
That is one fancy as fuck ecommerce site to be that reliant on features in modern browsers that lack automated transcompilation/shim generating tooling.
Are there actually IE11 bugs at a noticeably higher rate than bugs (or intentional deprecations) in other browsers? I thought we were just talking about a lack of new features that are present in Edge.
(If you were talking about IE5 or IE6 I'd understand the argument.)
most ie11 bugs that are relevant/annoying are around its flexbox impl. specifically equal height boxes via min-height and such. maybe some other fancy css animation or 3d transform stuff, but these are not terribly critical and can degrade without much impact.
thankfully, js and dom bugs are either rare or well-known and have lightweight polyfills.
would you find it acceptable if 1 of every 13 customers was unable to use your ecommerce site after you paid good money to acquire him/her via ads?
7.6% is a huge number. we don't even start those discussions until traffic is < 2% (and even then it's a huge pain in the ass to support). IE11 is not actually that terrible.
7.6% can be absolutely HUGE. I have to support Samsung "The Internet" version 2.1... and that's about 1% of traffic. But the amount that that browser still brings in in a single month is enough to pay a developer's yearly salary.
Same reasons that IE versions only support certain versions of Windows:
- they want to be able to switch to newer APIs when the underlying OS adds them (though in Edge's case it's more likely it was written from the ground up using newer APIs)
- they quite possibly want to use "you can get the new browser only if you upgrade" as a carrot for OS upgrades (they explicitly did this with IE; I haven't seen anything explicit for Edge, but it wouldn't surprise me if they're still taking that approach)
Things have been changing: people thought that the DOM stuff would need the GC stuff, but the new "host bindings" proposal my sibling linked to would give DOM access without the GC stuff.
Everyone I talk to suggests that it will be sometime next year.
Aside from the fact that a single thumbs up doesn’t qualify as a comment by HN standards, select Unicode characters are whitelisted but the rest are sanitized.
Maybe I don't get it, but why would I use this? What problem is it solving that [insert JS framework] isn't?
I understand JS is popular to hate, for a variety of reasons, but those reasons seem mostly to boil down to nerd cred. The "hate what's popular because being contrarian means I'm cool" crowd.
So, and this is an honest question here, why should I invest the time to learn an entirely new syntax and framework to build web applications? What can this do that vue.js or react cannot? (And I don't mean "it does it differently"; I'm looking for "it does it better" or "a JS framework doesn't do it".)
> I understand JS is popular to hate, for a variety of reasons, but those reasons seem mostly to boil down to nerd cred. The "hate what's popular because being contrarian means I'm cool" crowd.
Then you're not paying attention. I'm not sure how you expect anyone to answer your "honest question" when you've already dismissed all the answers. It's not just fashion, programming language design is a real thing that it's possible to do badly or well, Javascript is a legitimately bad language, and Rust is a legitimately good language.
Learn Rust or another ML-family language, actually learn it to the point where you can write a decent, full-sized, idiomatic program in it. It won't be easy, but it's worth it. I mean, I could talk through all the reasons such languages are better, but it seems like you're already determined to dismiss them, so really the only way is to see for yourself.
> Javascript is a legitimately bad language, and Rust is a legitimately good language.
Yeah, statements like these without even trying to back them up are pretty ridiculous.
Sometimes someone brings up some stupid issue that's never a problem in practice, but most of the time people just admit they simply have no real experience with JS. They just love to claim how superior they are, and anyone who disagrees is "not paying attention".
FYI you are just showing how awful the communities around "legitimately good" languages are.
I'm willing to back it up. I've written at length on these things before. But there seems little point when the person I was replying to has already pre-emptively dismissed any reason I could give.
> FYI you are just showing how awful the communities around "legitimately good" languages are.
Just to let you know, Rust's community is pretty cool and not awful at all. For instance, the Rust Survey 2017 reports that 98.7% of respondents feel welcome within the Rust community.
Furthermore, according to the Stack Overflow 2017 survey, Rust is the "most loved" language.
Sure. Language design has tradeoffs, but Javascript makes a lot of unforced errors: cases where the consensus was already established and Javascript went against it. (In fairness, it was a single-application scripting language written by one guy in ten days, not a language carefully designed for general-purpose use by a panel of experts.)
Non-lexical scoping is awful, everyone knows it's awful. Recent versions added a better-scoped "let", which is progress in a way but now means you have two different kinds of declarations with very different kinds of scoping. "this" in Javascript is just entirely useless, confusing semantics that resemble no other remotely mainstream language. "Prototypal inheritance" is a mistake; again, no other mainstream language uses it, for good reason, and again newer versions of Javascript have added "class" to correct this, but that leaves the language with two radically different ways of accomplishing the same thing.

"undefined" manages to be an even worse version of the billion-dollar mistake, null: errors become apparent even further from where they occur ("foo is not a property of undefined" failures a long way away from whatever caused the value to be undefined), where what you want is fail-fast. Extremely loose value coercion at runtime is terrible for large programs in a language without a type system; even at the time Javascript was first created, Python or Tcl knew this. (Perl didn't, but avoided the worst of the problems by not having so much operator overloading, e.g. using . for string concatenation.) Indeed, lack of any kind of type system is pretty indefensible in a general-purpose language these days (even Python is getting type hinting), though this was less common knowledge back when Javascript first came out.

The language is far too dynamic for working with mixed security contexts even though it's the most popular language for that very use case, forcing browsers to resort to crude sandboxes and coarse-grained permissioning instead (e.g. you can give a site access to your webcam/microphone, at which point all the ads running on the page have access too). And there are a bunch of small usability niggles (the "wats" you see talked about), none of which is important in isolation, but developer-friendliness does add up.
Rust is a pretty conservative ML-family language design: where it differs from OCaml, it's because of deliberate design decisions. (In fairness, some of Javascript is descended from Smalltalk, which was more respected at the time than it is now.) Really, most of what Rust does ought to be table stakes for language design (and it kind of is, looking at e.g. Swift or even Kotlin), and should have been since the 1970s when ML came out. To quote Wikipedia: "Features of ML include a call-by-value evaluation strategy, first-class functions, automatic memory management through garbage collection, parametric polymorphism, static typing, type inference, algebraic data types, pattern matching, and exception handling. ML uses static scoping rules." (Rust may appear not to have exceptions, but ML "exceptions" have more in common with Rust's panic/recover than with Java-style exceptions.)

These things are small and simple but very useful and general, and they allow you to push a lot of work out into libraries written in plain old "userspace" Rust code (though this would be much more true if the language had HKT, grumble grumble) rather than having to "bolt on" ad-hoc features at the language level. The only novel-for-mainstream language-level feature in Rust is its ownership tracking (i.e. the borrow checker), which is about the right amount of innovation for a programming language to have, and by all accounts is working well. Good language design is as much about leaving things out (of the language itself, pushing them into userspace code) as it is about putting things in.
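To make a couple of those features concrete, a small illustrative Rust snippet with algebraic data types, pattern matching, and `Option` standing in where Javascript would give you `undefined`:

```rust
// An algebraic data type: the compiler knows every possible shape
// and forces `match` to handle all of them.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { width, height } => width * height,
    }
}

// `Option` makes "might be absent" part of the type: the caller must
// handle `None` right here, instead of an `undefined` surfacing at
// runtime far from whatever caused it.
fn first_positive(values: &[f64]) -> Option<f64> {
    values.iter().copied().find(|v| *v > 0.0)
}

fn main() {
    let shape = Shape::Rect { width: 3.0, height: 4.0 };
    println!("area = {}", area(&shape));

    match first_positive(&[-1.0, 2.0]) {
        Some(v) => println!("found {}", v),
        None => println!("none found"),
    }
}
```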
Simply put, at the moment, you wouldn't. But when wasm is mature enough (host bindings to control the DOM directly), there will be frameworks with 10x the performance of JS ones. This framework is a step in that direction.
Also, it gives Rust programmers the option of not touching JS. Totally worth it.
The reason people like Rust more than JavaScript is that Rust is better at what it tries to do than JavaScript is at what JavaScript tries to do. That said, they don't try to do the same thing, so you shouldn't use Rust in all situations. Rust is focused on speed, safety, and a low bug count, but it's slow and annoying to write and refactor. If you are willing to make that trade-off then do; if not, then don't.
Well, it is still mostly JS. Claiming JS is crap (often stated as "beyond repair") while TS is good just because of an (arguably) light addition shows some serious cognitive dissonance. But never mind:
> The fact that it can be transpiled to Javascript is irrelevant
For some of the same reasons you might want to write your back-end in JS, but going the other direction. You could share code between front-end and back-end, but have a very strongly typed, secure, fast, and less bug-prone language to do it in. Not everyone's cup of tea, but if you've already got a lot of code written on the back-end, this might make a lot of sense.
You wouldn't invest time and energy to build a web app in Rust, but you would be able to write your frontend in Rust instead of investing time and energy in the $JSFRAMEWORK of the year.
I'd imagine that it's useful to be able to share a data context between, for example, a route handler, its rendered template, and the embedded client-side "scripting" that may be quite extensive these days.
("scripting" in quotes because Rust embedded in HTML templates can be so much more than logic-less templates: a bit like a server-side template, the DOM, and client-side Javascript all in one. And I'm just guessing, but because it's not dynamic but compiled, it's nonetheless safe even though logic is in there...)