Single Page Applications using Rust (sheshbabu.com)
886 points by rkwz on Aug 11, 2020 | 482 comments



From the article:

> Building UIs by composing components and passing data in a unidirectional way is a paradigm shift in the frontend world. It’s a huge improvement in the way we reason about UI and it’s very hard to go back to imperative DOM manipulation once you get used to this.

IMO, UI structure and state management of the UI are orthogonal concerns. Trying to make these part of the same problem - a 'Component' - is how you wind up in the Angular-style trap where you can never quite cover 100% of the use cases, so you have to keep adding new leaky abstractions to compensate for prior ones. Passing events back up the chain is a particularly egregious constraint when you are trying to do things with the UI that the modeling does not agree with on paper (i.e. modals, status bars and nav menus are children of whom? One big global component?).

I have made this realization recently when working with Blazor. The statefulness of a UI is a completely different thing from the visual presentation of the UI. We do one-way binding into UI (e.g. foreach over collections to build table rows). But, stateful concerns are injected as scoped services (i.e. per user session/request). UI events are passed into the state service methods directly, and the UI is redrawn on top of the updated state. Each state service can only be mutated by way of one of its public methods.

This approach allows for us to build & test our UI state machines to deterministic levels of correctness without a single line of HTML/JS/CSS or manual user testing being involved. Once we have the state machine figured out, we can build whatever arbitrary UI we want on top of it.


Have you used React? It also uses a "Component" approach, but it doesn't suffer from the same abstraction issues as Angular. Your approach seems basically equivalent to the "redux" approach except with an OO flavour rather than a functional one.

Personally I've found a mix of "component state" for things like UI state (e.g. is that modal open? is that checkbox checked?) and "external state" for actual product data (e.g. array of users) works pretty well.


It's barely even lib-dependent -- we use rxjs instead of redux; React hooks have useReducer now, and the "reduce" function isn't a new idea, so it shows up in many other places where it can easily be driven by events. I'll often use this approach along with a "controller component", which manages the less directly visible components. It's common already for things like routing.
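The portable piece is just a pure `(state, event) -> state` function; the same reducer plugs into `useReducer`, an rxjs `scan()`, or a plain unit test. A toy sketch (names are illustrative):

```typescript
// A reducer for some hypothetical modal UI state. Pure function:
// no framework imports, no DOM, trivially testable.
interface ModalState {
  open: boolean;
  confirmed: boolean;
}

type ModalEvent =
  | { type: "open" }
  | { type: "close" }
  | { type: "toggleConfirm" };

function modalReducer(state: ModalState, event: ModalEvent): ModalState {
  switch (event.type) {
    case "open":
      return { ...state, open: true };
    case "close":
      return { open: false, confirmed: false }; // closing resets confirmation
    case "toggleConfirm":
      return { ...state, confirmed: !state.confirmed };
  }
}
```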


I have not played around in the React space at all. I do agree that what you describe sounds similar to my ideology. The part that I would disagree with is the coupling of _any_ state with a UI component.

What happens if you want to move the confirmation checkbox from the modal back to the main view? Do you have to edit 2 state machines or 1?


You don't edit the "state machine" at all. As long as the component is reflecting the state (which happens automatically), you wouldn't have to make any changes other than the moving of the checkbox.


I’d go further in pushing back and actually say I think the single global state store (or stores) like redux and others are an anti-pattern.

You actually do want your state to live inside components. The examples are numerous, but it basically all comes down to composability.

Here’s a recent example:

You build a nice “autocomplete” for your site. In redux, you’d have it all in some namespace perhaps. But now you realize you need a second autocomplete somewhere. What do you do?

In the component model, you’d have already written the state such that there’s an “autocomplete state managing component” that accepts the right props to initiate it. Adding the next autocomplete widget now is as trivial as adding a component. But in the global/redux model, you would be far more likely/prone to not only couple the initial props and logic to the initial data, but you may not even have designed any aspect of it to be composable because you never considered it to be independent. How do you extract your autocomplete and publish it for other teams to use? It’s not possible, unless you do a lot of work and force them to use redux as well.

This may sound overly architected, but in fact it’s less so in the React world. Redux is the additional dependency, and additional and different layer of logic. Keeping within the component model is simpler and grows and composes more easily. Especially once you get into things like parent/child components that want to share state specifically for that sub-tree.

My take here is global stores are a bad solution for UI where the goal is tree-specific composability and generally building flexibly re-usable lego blocks is the goal. Redux is an anti-pattern that is simpler initially but encourages bad patterns, calcifies your stack, and ultimately slows you down as the app grows.


Agreed. But redux is useful for some things like data on the currently logged in user, or the state of feature flags. These are bits of data that you may well need all over the place, and pulling them in from redux is easier than threading them through components as props.


Redux works through a first-class React feature -- the context api. You can utilize the context api yourself to create modular state wrappers that can exist side-by-side rather than being restricted to a single global store.

https://reactjs.org/docs/context.html


At that point you now have the downside of writing reducers (verbose, unintuitive) and the dependency on a large library with a lot of customization to get it even close to syntax you'd prefer.

Namespacing doesn't solve what I'm talking about here completely, you'd still be writing state away from the components that use it and in a reducer form, plus there are numerous pitfalls to doing this in redux (you'd end up with a whole toolkit of addons to get it to be what you want).

I'd recommend looking at something like zustand or recoil, both which are headed in the direction we should be at. There are others, check out dai-shi's work on a few different state systems (use-atom looks decent).


One, since state lives within each component and can be passed as props.


> What happens if you want to move the confirmation checkbox from the modal back to the main view? Do you have to edit 2 state machines or 1?

Could be either; it's up to the dev to decide at what level to handle that state.

Often with redux you use a single global state.


See also [Ephemeral state vs. App state](https://flutter.dev/docs/development/data-and-backend/state-...) in the Flutter docs for a great description of this.


What you described is exactly what I’ve been doing when building UIs. All of the logic of the front end application is completely outside of the view layer. I think this coupling of the view vs. the front end application is bad, and it’s the most common default mode.

Then, the view has only minimal logic. Like you mentioned, it does require some code, like mapping over an array to create the individual rows of a table. You also need one level of if branching, because sometimes you need to show or hide a UI element based on some state.

But that’s it. I don’t even render components when testing generally. The setup code for actually rendering UI is very brittle, and when was the last time your bug was that you called the wrong function in response to a button click? The small layer between the UI framework and the brains of the application is best tested by actually looking at the rendered view anyway. Its completeness is qualitative.

Anyway, I’m really fond of this approach and wish more front end devs were open to doing it.


I find the reference to Angular confusing here, since generally in an Angular app you will keep a single client’s state concerns in Services that hold the results of API queries, with those results flowing into the UI reactively. You only pass input into Components from other Components for transient, presentational UI state.


The React team seems to have realized the same thing, and their solution is what they call React Hooks. It allows devs to write the implementation of the business logic imperatively, while calling it is effectively declarative.

The hooks are then used to split out the business logic of the app from its view.


This reminds me amazingly of writing HTML and binding JavaScript to layout to handle events.


It's not quite the same. The main difference is that the data-binding is only one way, so it makes the whole thing a lot easier to reason about.


Excessive property drilling and manual event bubbling is problematic but only really commonplace in small projects. It’s recommended to use a pattern like you’ve described leveraging Angular’s dependency injection to utilize services for managing your application state.

Judging from your other comment about Blazor I don’t see anything you can’t do in Angular.


I am interested in understanding this further -- is there a write-up or explanation with example anywhere regarding how you achieve this in Blazor.


As far as I am aware, there is no public document on the internet which describes this approach in a consolidated way. Unfortunately, I cannot share examples from the codebase I am currently operating on. That said, I can offer some MS documentation links in hopes that this paints a slightly better picture for how I achieve this outcome.

This describes how you would request a scoped stateful service from an arbitrary Razor Component:

https://docs.microsoft.com/en-us/aspnet/core/blazor/fundamen...

This describes how you would inject these stateful services into DI in a way that scopes them as desired:

https://docs.microsoft.com/en-us/aspnet/core/blazor/fundamen...

Note that we are using Server-Side Blazor, so this approach works for us. I do not think it works with WASM (at least out of the box), but we do not care for this form of Blazor hosting right now, so it's not a concern for us.


> Note that we are using Server-Side Blazor, so this approach works for us. I do not think it works with WASM (at least out of the box), but we do not care for this form of Blazor hosting right now, so it's not a concern for us.

Hmm interesting, how does that work? What enables the interactivity in the browser?


Blazor Server downloads a small JS script (SignalR) to the client on connection. All user events are sent back to the server through SignalR, then DOM changes are sent back to the client based on that input.

I use it at work for internal intranet web apps so it's pretty handy for our use case.


> Once we have the state machine figured out, we can build whatever arbitrary UI we want on top of it.

You can. It's probably still better to use a comfortable UI library to help, and if you're doing your app state transitions in a central place, pick a view library that works well with that pattern (React, its siblings, or Vue for example).

In a toy app, I recently tried updating some DOM using the unidirectional data flow model, but without a UI library. It was horrid. The DOM really fights you in this regard: it's pure OO and stateful itself so you're better off using a library to tear down and rebuild your DOM for you.


what is the appeal of Blazor (as for that matter yew), when now there's a healthy ecosystem based on Typescript for the front-end? Is it just the ability to never leave C# (or rust)?


Productivity is the biggest appeal. 99% of our experience with Blazor involves writing some Razor-blessed HTML, taking dependency on state services and plugging things into DI. Our existing business services are simply consumed by the UI state services, so there isn't even any code involved for wiring this part up aside from some AddScoped<T>() calls in your bootstrapping code.

Things that we do not worry about anymore: Javascript, controllers, CORS, JSON, HTTP status codes, API documentation, 3rd party dependencies.

I say we don't worry about javascript and that is true, but I want to be clear that there is still a tiny amount of javascript involved. We do maintain a single "BlazorExtensions.js" file which covers all js interop needs across the entire solution. This file is currently only 120 lines long. So, while we do have javascript, we absolutely do not lose sleep over it anymore.


> Things that we do not worry about anymore: Javascript, controllers, CORS, JSON, HTTP status codes, API documentation, 3rd party dependencies.

That's a good looking list of things not to worry about. Especially CORS, the bane of my life.. the fact it's pre-flight has caused me so many problems in my (tiny) UI responsibility.


I've lost approximately a week of my life to it. Never again.


How does Blazor avoid CORS issues? It's still running client-side, right? And thus would be subject to the same CORS restrictions as JavaScript.


How is the performance? Is this for an internal application, or a public facing one?


Performance is good, but I would only use it on well-bounded target audiences. Anything over 10000 simultaneous users on a single instance would probably start to concern me. It is perfect for any internal or B2B UI you might want to build.

That said, combine Blazor with other Microsoft magic like Orleans, and you can start to horizontally scale. Might even be possible to make Server-Side Blazor viable for mass public consumption using this approach.

Edit: It does appear this would be fairly easy to rig a test for as of latest versions of everything. Should be able to throw Blazor and Orleans on top of the same IWebHost instance.

https://devblogs.microsoft.com/dotnet/orleans-3-0/#co-hostin...


Yes, there is always some value in having the same language client and server. Both in terms of being able to have shared types and functions, and less cognitive overheard when switching from client to server.


I like SwiftUI. Haven't done much with it yet, just a few small experimental apps, but I like the concept. Especially 2-way binding.

Not being able to create components is a problem. How else are you going to divide and conquer? So you will need components for any reasonably complex UI. So yeah, the challenge is how to do components in a good way, not how to avoid them.


I really wish there were a Blazor for Java or Rust or anything other than C# really.


Did you have a look at LiveView for Elixir/Phoenix?


The promise of Rust is safe and fast low-level development. What is the primary selling point of using rust for webdev? Speed? I can mainly see disadvantages when using rust for this scenario.


A few advantages:

- Rust compiles to WASM, not JS, so you can actually take advantage of its low-level nature in the browser for performance critical tasks within your app.

- The Rust code you're writing is still statically compiled, so you still get to take advantage of Rust's typechecking and IDE features.

- Some people just like writing Rust code more than Javascript. Part of the reason Javascript took off so quickly on the server was because there's a huge productivity boost from using one language everywhere. Similarly, if you love Rust but think that Javascript paradigms around prototypes, closures, or `this` are weird, you get to ignore that and take advantage of one of the best app distribution platforms in the world without learning a new language.

I pretty solidly hold to the position that increasing language diversity on the web is a good thing. Particularly with Rust, since they've put in the work to have proper, accessible support that's still separating app logic from DOM layout and CSS styling. I'm really happy with how their community has approached building out Rust as a first-class language on the web rather than just as native language that happens to have a web compile target.


> you can actually take advantage of its low-level nature in the browser for performance critical tasks within your app.

Yeah, but the problem is that 99.9% of front-end work is only performance-critical for DOM rendering (the 0.1% is stuff like video encoding & decoding) and WASM can't help with that.


> 99.9% of front-end work is only performance-critical for DOM rendering

There are two meanings of "DOM rendering" in the context of UI-as-a-function-of-state libraries: 1) generating the virtual DOM from data, and 2) modifying the real DOM to match it. Arguably there's even a 3) browser reflow as a result of those DOM changes.

You're right that 2 and 3 can't really be helped by WASM. But 1 can, and while it's not usually the bottleneck, it certainly can be. At my last company it was not terribly uncommon that fixing UI jank came down to eliminating unnecessary React render function calls because the sum total of them all - running the actual JavaScript logic - was taking too long. Assuming yew computes the virtual DOM in WASM (I don't see how it could be otherwise), the performance increase could definitely be beneficial for certain highly-complex apps.
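Step 1 is plain data manipulation with no DOM access, which is exactly why it could run in WASM. A toy illustration of the idea (this is not how React or yew actually implement diffing):

```typescript
// Minimal virtual-node diff: comparing two trees and emitting patches is
// pure computation; only *applying* the patches touches the real DOM.
interface VNode {
  tag: string;
  text: string;
}

type Patch =
  | { op: "replace"; index: number; next: VNode }
  | { op: "append"; next: VNode }
  | { op: "removeFrom"; index: number };

function diff(prev: VNode[], next: VNode[]): Patch[] {
  const patches: Patch[] = [];
  const shared = Math.min(prev.length, next.length);
  for (let i = 0; i < shared; i++) {
    if (prev[i].tag !== next[i].tag || prev[i].text !== next[i].text) {
      patches.push({ op: "replace", index: i, next: next[i] });
    }
  }
  for (let i = shared; i < next.length; i++) {
    patches.push({ op: "append", next: next[i] });
  }
  if (prev.length > next.length) {
    patches.push({ op: "removeFrom", index: next.length });
  }
  return patches;
}
```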


Switching from JS to Rust will not give you a noticeable improvement in runtime speed, especially since V8 is absurdly well-tuned. If your JS logic was the bottleneck it's a good sign that the code was poorly architected and Rust cannot help you with that.


Yew already seems to outperform most other frontend frameworks and view libraries [0]. More importantly, it does this while still paying the serialising cost when moving from WASM to JS and back. This cost is set to go away when WASM-to-JS interop improves, which will make Yew even faster.

0: https://github.com/DenisKolodin/todomvc-perf-comparison


Rust compiled to WASM is still vastly faster than JS (often even by about 10 times or more). But yeah you probably should be fixing your JS at that point if it matters that much.


Unless your claim is that WASM isn't actually faster than JavaScript - which I haven't personally verified but seems like a pretty shaky argument - you're not really making any sense.

When you've got 10,000+ component instances on a page, tiny bits of render logic add up. You can identify whether actual JS logic is your bottleneck (and to an extent, which JS is your bottleneck) through profiling.

The most common fix is to avoid calling render functions at all where possible. These are cases where the output of the render function will be identical to the previous output - which means React won't make any DOM changes - but where the render function will itself get called anyway. You can prevent this through better dirty-checks on props and state (shouldComponentUpdate, in React's case). Though if you're not careful, even those comparisons can become a limiting factor (not often, but sometimes). Immutable data structures like those in Immutable.js and pub/sub component updates like what MobX does can help ensure comparisons don't get expensive.
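The dirty check behind a typical `shouldComponentUpdate` is usually just a shallow comparison along these lines (a sketch, not React's actual implementation):

```typescript
// Shallow equality: cheap per-key reference comparison. Only correct when
// props are treated immutably -- values mutated in place defeat this check.
function shallowEqual(
  a: Record<string, unknown>,
  b: Record<string, unknown>
): boolean {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  return aKeys.every((k) => a[k] === b[k]);
}
```

This is also why immutable data structures help: a fresh object per change makes the reference comparison both fast and reliable.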

Another trick is to do expensive operations like mapping over a large array ahead of time, instead of doing it on every render. Maybe you even need to perform your array transformation in-place with a for loop, to avoid allocating-and-copying. This is especially true if you do multiple array operations in a row like filtering, slicing, reducing. Memoization of data that gets re-used across renders is a generally helpful pattern.

Another huge factor in this case is concurrency: JavaScript runs on the same browser thread as reflow and all the rest, meaning that all of this JS logic blocks even non-JS interactions like scrolling and manifests very directly as UI jank. React rendering cannot happen in a worker thread because React requires direct access to the DOM API (element instances, etc), and worker threads cannot share memory directly with the main thread; it's message-passing only (the upcoming React Concurrency project will help alleviate this problem, but doesn't directly solve it). Rust, on the other hand, can share memory between threads, meaning that in theory (assuming Yew takes advantage of this) renders can happen in parallel. Even if they don't, WASM already lives in a separate thread from the main DOM, which does mean it will probably incur some constant message-passing overhead, but it should never block reflow. And that would go a very long way towards preventing user-facing jank.

The average web app doesn't run into these problems, and usually they can be optimized around, but when you do run up against these limits any across-the-board speed improvement that raises the performance ceiling can reduce the amount of micro-optimization that's necessary, reducing the cost in developer time and likely improving readability.


That is exactly what I'm talking about. Everything you referenced is an architecture problem, not a JavaScript problem. You can write slow apps in JavaScript+React, you can write slow apps in Rust. Unless milliseconds matter (doubtful in browser-land) you won't get the kind of performance you want for free just by switching languages.


Milliseconds absolutely matter. Every React render cycle (the entire relevant subtree from top to bottom because again, it's all synchronous) that takes more than 16 milliseconds presents to the user as a dropped frame. In fact that 16ms also has to include any DOM updates and the resulting browser reflow, which you have much less control over and are usually more expensive than your JS, so really the window is smaller than that.

Also, many of the optimizations I listed above can make code less readable in small ways. Many of them are things you should not do eagerly (premature optimization is the root of all evil), and should only go back and do once you've identified a specific problem. If an across-the-board speed increase prevents them from ever becoming problems, that's a win.

If you're thinking I'm a JS-hater, you're wrong. I think JS is a good language and I love using it where it's appropriate. But there are some usecases that benefit from a faster technology, and it's absolutely bonkers to try and argue that that technology shouldn't exist because "you can still make something slow with it if you really try".


16ms with 60fps targets, but there are higher frequency displays hitting the markets so that target might shift to 8ms or lower in the future.


60 fps is rarely needed in web apps.


<60fps results in a noticeably worse user experience, especially when scrolling. "Need" is relative, but I was really just trying to make the point that milliseconds matter: even if we cut that in half to 30fps, that's still 32ms maximum per render cycle.
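The budget arithmetic is simply 1000 ms divided by the target frame rate, and everything -- JS, DOM updates, reflow -- has to fit inside it:

```typescript
// Per-frame time budget in milliseconds for a given target frame rate.
function frameBudgetMs(fps: number): number {
  return 1000 / fps;
}
// ~16.7ms at 60fps, ~33.3ms at 30fps, ~8.3ms at 120fps.
```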


What scenario can you think of where you have state updates while the user is scrolling?


Here are a couple cases we ran into:

1) We had a querying interface that would allow the user to construct a query and then it would return potentially thousands of results, asynchronously, over a websocket. We only displayed the first 300 on screen at a time, but sometimes the updates would come in so rapidly during that first 300 that one render wouldn't be finished before more results were available and the next render triggered. Things would get really backed-up and the UI would hang for multiple seconds at a time. So we decided to throttle the rendering - get the results as fast as possible, but only render once every 500ms or something. But during this time the user still might want to scroll around and look at other parts of the page, so we didn't want it to pause for 100ms every 500ms. It was a constant battle to keep things responsive while all this was going on.

2) We had another screen that would load another massive list (thousands and thousands) of entities as soon as you visited. These also came in gradually over time to spare the user from waiting for the last result before they could see the first. Similar deal: we throttled, but we needed to keep things responsive even during that throttling because the user would be scrolling up and down the results as they were coming in.


Both of these sound like side-effects of trying to render massive amounts of UI artifacts that the user doesn't actually need or want to see. Whether you use JS+React or Rust+Yew you're going to get yourself into trouble with firehose-style data handling.


Our users specifically demanded to see this much data. They actually complained that we couldn't show more. Once again you're making bold claims about a subset of use-cases that you clearly have no context for commentating on.

I never claimed that switching languages would have magically solved all our problems, but in our case it could've been a significant boon to performance which could've been one factor of many that contributed to a solution, and I just can't figure out why you have such a problem with that idea.


I'm an old 3d graphics programmer. One of performance rules is only render what will be displayed in that frame because rendering is expensive. That is why there are various culling algorithms, some are quite complicated. The reward for implementing all these advanced culling algorithms is very fast frame renders that give you more time to draw special effects, draw more characters, etc in a single frame.

I am a data engineer now and sometimes have to build web interfaces for scrolling large data tables. Just like in the 3d graphics world, the trick to responsiveness is culling. Only download and display the amount of data that the user will see at a given time, plus some overhead so the user never sees gaps when scrolling.
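Under the simplifying assumption of fixed-height rows, the culling window is just arithmetic on the scroll offset and viewport height (a sketch; real list-virtualization libraries handle variable heights and resizing):

```typescript
// Compute which slice of rows actually needs to exist in the DOM,
// plus some overscan so fast scrolling never shows gaps.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan: number
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(totalRows, last + overscan),
  };
}
```

With 10,000 rows of 20px in a 600px viewport, only a few dozen rows are ever rendered at once.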

While Rust may have solved the problem in #1, it is unlikely to solve the problem in #2 because you are also fighting the network where Rust is useless. Need culling.


Unfortunately, because web content is automatically positioned/sized in a way that's (mostly) opaque to JavaScript code, this is a pretty hairy problem (compared to graphics, where you have all the numbers on hand). So most of the time you'd either have to a) try to replicate the math the layout engine ends up doing, which makes things more brittle, or b) directly measure the computed dimensions/positions/visibility/scroll position of different elements, which tends to be a pretty large performance liability and in many cases would eliminate any gains you might receive.

There are some edge cases where these techniques can work - like if you have a really simple layout, and you can give a fixed-height to every item (table row or otherwise), and you assume the user is never going to resize their browser window - but it isn't nearly as obvious of a win as it is for 3D rendering and we just never decided it was worthwhile to try going down that rabbit-hole.


> Unfortunately because web content is automatically positioned/sized in a way that's (mostly) opaque to JavaScript code, this is a pretty hair problem (compared to graphics where you have all the numbers on-hand).

Exactly the same for 3d graphics. The auto-positioning is different but conceptually similar.

Yes, B is my recommended approach. Since we are talking about performance, we should really be putting limits on what fast and slow mean. It doesn't really matter if something is fast or slow in absolute terms; what matters is the performance difference between different approaches.

For B, on average, performance will be in the microsecond range, DOM rendering will be in the millisecond range, and the network is in the >100ms-to-seconds range. It is unlikely that B will be equal to or slower than brute-force rendering everything. The X microseconds spent figuring out the culling window save Y milliseconds of rendering and Z milliseconds to seconds of querying over the network.


It's not that I have a problem with the idea, it's that I've spent most of the last decade building data visualization apps on the web and have never truly hit a wall with respect to JavaScript's performance characteristics. I think of it like this: no user is visually processing data faster than React can render it. If you render 300 elements and then want to render more updates before those first 300 had finished, did you need to render those first 300 at all? A human can't visually process hundreds of elements that quickly. The problem is that you're attempting to render a bunch of data that's not actually what your user is trying to see, not that React can't render it fast enough.

Most "performance" problems on the web are solved by simple techniques like virtualization, pagination, pushing expensive calculations to the server, avoiding round-trip requests, or basic UI/UX improvements. The only time I would care about JavaScript's actual runtime performance if I were doing something crazy like 3d rendering in the browser and that's a completely different set of problems than React or Yew are set up to handle.


The constraints on data visualization are not the same as the constraints on an IDE, which is closer to what we were building. Also, our users could definitely "process" the data faster than it could be rendered, because what they were doing most of the time was skimming through it for points of interest. They were not sitting and deeply considering each data point one at a time; they could glance at a page and know in a moment whether what they were looking for was there, or whether they needed to scroll further, or whether they needed to tweak the query and re-run it or click a link to a different screen. It was muscle-memory for them, and the tiniest hitch was frustrating to the user experience.


60 fps should be the bare minimum given the kind of hardware we run these days.


No, it should be based on the needs of the use case, as always. Very few websites, even highly dynamic ones that require React, need to be rendered at 60 fps.


> the 0.1% is stuff like video encoding & decoding

I would use WebGL and WebGPU for that, and AssemblyScript for WebAssembly, so there's nothing where Rust would make a major difference.


Avoiding 1 is actually one of the selling points for Svelte.


According to my (possibly flawed) understanding it doesn't eliminate it exactly, but does cut down on it a lot by doing as much work at compile-time as possible


> the 0.1% is stuff like video encoding & decoding

In which case, Rust still isn't a good fit for a front-end library (unless you are patching support for a new codec into an old browser).

Frankly, the browser will have a faster version of the codec that can also take advantage of other hardware that you wouldn't want to expose to WASM.


Have you used the modern web? We might not be building apps with "performance criticial tasks", but the modern web would benefit immensely from anything that improves baseline performance.


Yes, I use it all the time, but most websites are really simple things – for every figma.com there are a thousand e-commerce sites that do very mundane things.


Precisely

Talking about perf without understanding the problem domain is a common mistake Rust folks tend to make.

I don't think there is enough push for people to switch to a new language just for performance sake in front-end land. JS ain't that bad, and performance is certainly achievable by careful and incremental optimization.


You should check out Glimmer and what they're doing with their byte-code components – I think the Glimmer vm in Rust would boost DOM rendering performance vs virtual dom implementations.


I would rather have a fast way to debug than raw execution speed when dealing with single-page apps. Execution speed isn't a concern; it's payload size and TTR speed that we care about when working with a SPA, and this WASM code is over 200,000 lines.

What a nightmare to debug!


Exactly! Practically speaking, rust for SPAs is useless, at least for now.


> Speed?

As surprising as it might sound -- not really. There's a small-but-non-trivial amount of overhead involved in calls between WASM code and the DOM API's, plus (de)serialization.

It's more of a familiarity/existing-tooling thing, same as writing web apps in C# with Blazor. If your entire team only knows C# and you have all your existing tooling there, then it could seem an appealing option.

But there's also the argument for more stringent safety with Rust compared to, e.g., TypeScript.


> There's a small-but-non-trivial amount of overhead involved in calls between WASM code and the DOM API's, plus (de)serialization

My understanding is that this is going to get a lot better in the future though -- last I checked the plan was for WASM to eventually have direct bindings to DOM APIs, at which point you won't need to interact with JS at all.


That's true, if the Interface Types proposal gets accepted that overhead should be significantly less. At that point it'll get really interesting what the potential performance looks like for DOM-interaction heavy apps.

https://github.com/WebAssembly/interface-types/blob/master/p...

I wouldn't think it's far behind. The Multi-Value proposal, which had huge ramifications, was recently accepted in April so WASM is clipping along at a fair pace still.

https://hacks.mozilla.org/2019/11/multi-value-all-the-wasm/


Firefox Nightly and Chrome Canary both recently landed multi-value and reference types: https://webassembly.org/roadmap/


This is true if you are writing code that has a lot of UI interaction, but one sweet spot for WASM I've found is if you just want to do a bunch of computation on the client and surface it. For example, I wrote a SPA for generating crossword puzzles[1]. Maybe I'm wrong, but I don't think I would have been able to get the same performance in JS even with modern optimization.

1. https://crossword.paulbutler.org/


That's really cool, thanks for sharing it. It found a 16x16 with a lot of complexity incredibly fast!


Nice crossword generator!

Now I'm somewhat tempted to write a js implementation to actually compare.


I was very surprised to find this. I have done some WASM experiments and found that JS is very often incredibly well optimised, often running faster than WASM. Do not underestimate the years of JIT optimisations that went into V8 and other JS engines.


V8 is an incredible piece of machinery, and I really wish other JITs and languages got access to half the engineering and monetary effort that went into it.

> often running faster than WASM

This seems pretty counter to everything I've read about WASM - V8 is some black magic, but WASM should run far faster. What language was the WASM blob compiled from? Mozilla has also demonstrated that they can send the DOM to a WASM blob, have it perform the necessary transformations and return it, faster than JS could perform the same transformations itself.


I was actually playing with the raw WASM, but I checked what Rust compiled to and ran it as well (it wasn't any better). It was a simple fibonacci benchmark so perhaps not representative but it still left me very disappointed.
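For concreteness, a fibonacci kernel of the kind described looks roughly like this in Rust (my own sketch, not the exact benchmark code):

```rust
// Iterative fibonacci -- the kind of tiny numeric kernel often used
// to compare a WASM build against the JS JIT. Note that a benchmark
// this small mostly measures loop/JIT quality, not real workloads.
fn fib(n: u64) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}

fn main() {
    assert_eq!(fib(10), 55);
    println!("fib(50) = {}", fib(50)); // prints fib(50) = 12586269025
}
```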


Same, do NOT underestimate V8 black magick + optimizations.

I've done a fair bit of benchmarking/performance experimentation with compiling Rust/Go/Zig etc to WASM (including with SIMD or parallelization enabled).

It'll be really interesting watching the progress of V8 + WASM SIMD proposal, it's been enabled behind a flag in V8 for some time now.

https://v8.dev/features/simd


Although Rust is most known for bringing safety to low-level development, it also happens to be a nice, modern programming language with things like match statements and sum types. I find myself using it frequently for things I used to do in Python, with greater productivity once a program reaches the point that I can't keep it all in my head.


Well, that's definitely a sweet-spot, but as a by-product of that, Rust is still a well-thought through language with a strong and logical type system, some very useful features for expressing domain models, etc. and a model of memory management which maps very well to WASM. So if you already know Rust, this is a short and logical step. If you don't already know Rust, I wouldn't learn it just to write web apps in - but if you do, you now have all the advantages of "same language client/server side" as well as the advantages of safety, good libraries in certain spaces, etc.


> I wouldn't learn it just to write web apps in

I'm going to be looking into the other way around: writing a web app in it to learn it. Seems like a great use case that would make it easier for me to learn, as someone who has experience with web apps and not so much with desktop software.


It's the reverse of what happened with JavaScript. In other words, JavaScript started on the front-end and then spread to the back-end because it was advantageous to have one language for both. One of those advantages is that folks who were experts in JavaScript were suddenly able to use their language of choice for back-end work too.

Rust started on the back-end but is fairly likely to spread to the front-end in the future for the same reason. One of the advantages will again be that folks who are experts in Rust will be able to use their language of choice for front-end work too.

Beyond this, I would argue that Rust's type system is a significant draw for many folks as well. I dare say it has a lot more potential than Elm, a Haskell-like language designed just for web development.


I don't think Rust is just for low-level development anymore, for example at Commure we have been using it very successfully for "enterprise app".

So if you're already using Rust for the app backend, I think the interest is more about using the same language in the back and in the front. Since nowadays one usually wants a typed Javascript (e.g. Typescript) anyway, it is far shorter of a leap to Rust.


I haven't seen it mentioned, but memory usage might also be better, and I have a good real-world example to show why it still matters in 2020: we use Jira to manage our software projects and our team's project manager uses a newish MacBook Air with 8GB of RAM. Jira is written in React and a single page for a ticket can easily eat 150+ MB of memory (it's even more for Jira boards). Our PM regularly complains about the slowness of his machine and I think Jira is a big contributing factor. I don't think much of the memory is used for presentation (so it might not be React's fault), but rather for business logic processing a large amount of data. And IIRC calling a simple `filter()` on a JS array creates a new array for the operation while Rust uses iterators.
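To make the `filter()` point concrete (a hedged sketch of mine, nothing to do with Jira's actual code): in JS, `arr.filter(f).map(g)` allocates an intermediate array for each step, whereas Rust's iterator adapters are lazy and only allocate once at the end:

```rust
fn main() {
    let tickets: Vec<u32> = (0..1_000).collect();

    // Each adapter here is lazy: no intermediate Vec is created for
    // the filter or the map; only the final collect() allocates.
    let high_priority: Vec<u32> = tickets
        .iter()
        .filter(|&&id| id % 2 == 0)
        .map(|&id| id * 10)
        .collect();

    assert_eq!(high_priority.len(), 500);
    assert_eq!(&high_priority[..2], &[0, 20]);
}
```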


I regularly slow to a crawl on a 32GB MacBook Pro, requiring a reboot because it starts swapping thanks to Chrome tabs, Electron apps/VSCode, and webpack/cargo/whatever, to the point that running cat on a file takes 5-10 seconds. All it takes is a little memory exhaustion for the kernel to pick a process for swap (iTerm and its children? why, thank you kernel!) and drive it into the ground. With Chrome it might swap just a single tab process or it might swap the rendering process.

I don't know why it seems React apps are some of the worst offenders, but I think it has to do with hooks injecting context components everywhere (at least that's what it looked like last time I popped open React dev tools). I'm guessing that out of tree state tracking is particularly memory intensive.


Possibility of sharing code between backend and frontend.


Yes. Isomorphic code was always about having to prove the correctness of a smaller body of code.

Web assembly will see a whole new round of people trying to do monoglot programming. Which I welcome, because I like JavaScript well enough but I don’t like writing it all day, every day, for years at a time.


Type safety is the key selling point here. Rock solid behavior for asynchronous and concurrent behavior is another one. Both could be considered weaknesses for Javascript.

I agree it's a bit too far out for a lot of javascript/typescript developers. But then the Rust community has a lot of former full stack refugees joining it. My observation is that "full stack development" using javascript is a phase developers go through before upgrading to some other language. You see similar patterns in the Go community. Both communities have lots of people who used to do lots of web and node.js development.

My money is more on languages like Kotlin, C#, and Swift crossing over to the browser. All three already have wasm compilers that still need a lot of work. This work only kicked off for Kotlin fairly recently. Kotlin also has a decent javascript transpiler that you can use right now. Both Swift and Kotlin are very popular for mobile UI development. C# has been used extensively for desktop and web server development. Obviously these languages come with lots of features that make them very suitable and popular for exactly the kind of stuff people use Javascript (and Typescript) for.

I haven't done much with Swift and C# but Kotlin is great for this. Most of my experience with that language is server side but I have done a few things with kotlin-js as well as some Android stuff. As of a few months ago, the kotlin-js tooling is getting to the point where it's a very solid choice. It builds, it eliminates dead code, it runs webpack for you, etc. The upcoming version (1.4.0) is currently available as a release candidate and includes something called Dukat. Dukat generates kotlin type headers from typescript type headers for npm dependencies. So that means you can integrate a lot of existing npms if you have to. Also the build tools integrate with webpack and you can target both the node.js ecosystem and the browser ecosystem.

Most of the bottlenecks for adopting either transpilers or wasm compilers in this space are the relative immaturity (or lack) of mature alternatives to popular javascript frameworks. You can do react apps in a bunch of languages now but it just feels wrong to do it. The pattern you see in other language communities is that they pretty much start rolling their own alternative frameworks. In any case, the amount of third party code making it to a browser is typically not that much for most webapps. For all its popularity, the react run-time code is not that large. A few hundred KB is considered a lot.


Predictability?

As far as I know, Rust's WASM performance is much more uniform than JS's.


A fast webapp is a selling point on its own.

Of course, it doesn't automatically have to be faster just because it uses WASM - but this looks promising.


I am hopeful that Rust can achieve what Elm did not.

I really fell in love with Elm early on, back when it was an experimental language for functional reactive programming that just happened to compile to JavaScript. It was an outgrowth of failed experiments in FRP from the Haskell world. I thought it got so many things right -- and it totally did. But then, just as soon as it started gaining real traction, development on Elm went silent and became siloed, staggeringly slow, locked-down, and unresponsive to users. I understand why this happened and I don't even hold it against the Elm team, but it certainly stunted the language's growth and adoption.

Rust has a much more expressive type system than Elm. The Rust world is much more open, responsive, and caring about user concerns. Rust isn't afraid to offer unsafe escape hatches even if they're not pretty or elegant. With Rust you have the added advantage of being able to use the language for the entire stack, both front- and back-end. That's especially compelling because Rust is on its way to being one of the strongest languages for back-end development due to its combination of type safety, expressiveness, and performance.


Clojure has been there as a fullstack language for a lot longer. It also fulfills the story that Elm was trying to do for much longer. And Reagent is a much needed improvement on React.

The borrow checker in Rust is a kind of silly tool to use in the context of a managed language (that does GC) like Javascript. I don't get it. It's like using a backhoe to plant a few geraniums. Am I just not getting this?


This is a complicated question, and ends up different for each individual. For me, I don't see the borrow checker as being more silly than GC, just an alternative, and one that speeds up my development process, not slows it down. I am also, of course, incredibly biased.

Rust also has many, many features that are not the borrow checker. Some people prefer Rust because of those features, in spite of the borrow checker.

https://without.boats/blog/notes-on-a-smaller-rust/ is also one of my favorite bits of writing on this topic.


That is a great link. The point at the end about making the language embeddable is actually one of the things I love about Rust the most. Rust is aggressively cross platform and I appreciate that a lot. Write a parser in Rust once, run it everywhere.

I’d use Swift a lot more (which has some of the features mentioned in the article) if the resulting code wasn’t limited to Apple OSes (the Linux support is crap). Or perhaps Kotlin if it wasn’t limited to the JVM, etc.


Kotlin isn't limited to the JVM thanks to Kotlin/JS and Kotlin/Native.


Golang


Wow...there is really really good advice in that post. I was considering making my side project language into a simpler Rust and that post just gave some great suggestions. Especially since I’m writing the compiler in Rust to WASM.


That's a good link there. I think the answer to a smaller Rust is Go, however. The comparative compiler speed alone is worth the price of entry. Don't @ me ;)


Smaller Rust is more like OCaml than Go IMO. Go has an entirely different design philosophy.


Go may be smaller than Rust, but I really don't think it is a smaller Rust. There are too many differences in the philosophy, and they make Go just some other language.


Go is so much smaller it's no longer anything like a smaller Rust, though.

Or is Go bigger because the GC represents a ton of complexity?


I don't see clojure in the same realm here. Both rust and elm are geared more toward enforcing correctness through their type systems to tame complexity in large projects. In my experience, clojure, being dynamic, fits more as a comparison to vanilla JavaScript. In a langauge-to-language comparison, I think clojure is somewhat more appealing than JavaScript due to immutability by default and general functional niceties. However, the fullstack comparison is not quite apples-to-apples as clojure (backend) is JVM and clojurescript (frontend) is node. While it works for some people, clojure feels awkward to me as a fullstack language.


> clojure (backend) is JVM and clojurescript (frontend) is node.

Node is a backend JS implementation. ClojureScript is compile-to-JS and can run on the backend in Node or on the frontend in browsers.


Thanks, that's correct. I often make the mistake of interchanging node for browser js by accident...


Types provide only the simplest most trivial kinds of correctness, though. And people routinely make mistakes with types.


Things like null safety aren't trivial for practical purposes IME.


Most Optional implementations are kinda terrible, though. They cost more than they save. If I have a function with a param and I release it with the param required, but later I change it to be optional (loosening a requirement), does Rust require everyone calling that function the old way to change their code? If so, that's not an improvement!


You're basically just regurgitating a recent Rich Hickey talk. Interested readers can probably find it on YouTube.

Rich is certainly a brighter individual than me, but some of his points are either him missing the point or being intentionally misleading.

For example, he discusses how Either types aren't true sum types because Either<A, B> isn't the same type as Either<B, A>. So he disparages people who say that Rust/Scala/Whatever have sum types. He's missing the point because 1) All Either implementations I've seen have the ability to swap the arguments to match another Either with the types backwards, so it's a sum type in practice, and 2) Clojure has none of it, so why criticize the typed languages by saying their type systems aren't perfect when your language's type system isn't helpful at all? Throw the baby out with the bath water?

To your specific point (which is also one of Hickey's), yes, it does kind of stink that loosening a requirement forces consumers to update their code. However, that minor downside does not mean that Optional is "not an improvement". It's still a HUGE improvement over Java's absurd handling of null (IIRC, Clojure is the same as Java there).

Also, maybe changing something to optional isn't really "loosening" the requirements. It's just changing the requirement. If the parameter changed to optional, don't you want to be alerted to that? Why is it optional now? What will it do if I pass None/null? Maybe I actually would prefer that to the way I called the old version.

It just never struck me as offensive to have to change my code when I upgrade a dependency. I have trouble sympathizing with that mindset.

Edit: And what is the Clojure alternative? You can loosen requirements, but really, you never had enforceable requirements anyway. Is it apples to apples to talk about a typed language loosening its contract?


> "Either types aren't true sum types because Either<A, B> isn't the same type as Either<B, A>."

that's such a weird argument! did he also complain that tuples aren't true product types because (A, B) isn't the same as (B, A) ? why would they be the same, and not just isomorphic?


I haven't watched the talk recently, but my feeling at the time was that he was just being pedantic about the definition of a sum type. Kotlin's nullable types would be an example of true sum types because they are symmetric. But you can only make a sum of `T + null` and not a more generic `T + U`.

His real point, I believe, was that the `Either` implementations weren't as good as true sum types because of ergonomics. It's part of his philosophy/bias that type systems get in the way and therefore cause more harm than good.

I don't really grok his point most of the time. It just feels foreign to me to not want as strong a type system as possible. But a lot of really smart guys feel that way: Him, Alan Kay, etc. I suspect that they're able to track much more stuff in their heads at a time than I am.


The point is Hickey brings up important points about language design as it's experienced by devs actually using the language. Hardly anyone discusses this. Furthermore, you seem to be making my argument for me when you claim that Clojure doesn't have types, so why complain about types? In Clojure you could write a type system to do all that, probably in a dozen hours (the language is programmable after all), but it would be an academic exercise to most, which is the point Rich is trying to make when he disparages other type systems.


I think its worth noting that in Rust you can create a function that can take either Option or the value itself, if that is what you really want to do:

https://gist.github.com/rust-play/b28257cd7c48d0f9e9b1893181...
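The gist boils down to something like this (my reconstruction, not the gist verbatim): a bound of `impl Into<Option<i32>>` accepts both a bare value and an `Option`, thanks to the standard library's blanket `impl<T> From<T> for Option<T>`:

```rust
// A function callable with either `i32` or `Option<i32>`, via the
// blanket `From<T> for Option<T>` impl in std.
fn describe(n: impl Into<Option<i32>>) -> String {
    match n.into() {
        Some(v) => format!("got {}", v),
        None => "got nothing".to_string(),
    }
}

fn main() {
    // All three call styles compile against the same signature.
    assert_eq!(describe(3), "got 3");
    assert_eq!(describe(Some(3)), "got 3");
    assert_eq!(describe(None), "got nothing");
}
```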


That is nice


If you change a type from T to Option<T>, yes, all the other code has to change to take the option into account.


Yeah, that's not good design. Loosening a requirement should not make callers complying with the stricter one change anything. But, ohhh... right, only the type changed, so everybody stop what you're doing and start over.


I strongly disagree, but this is why it's great we have a ton of languages! To me, forcing you to handle it is the exact point of an option type.

The type wasn't the only thing that changed, the possible values have changed. You may be getting more values than you were previously. This means that some of your assumptions may be incorrect. Of course your code needs to change.


I might be misunderstanding, but I think you are talking about slightly different points here. It seems to me that the critique of an explicit Option type (that acts sort of like a box, in contrast to Kotlin's T? vs T) applies to when you pass in the Option as a function parameter to a function that previously expected it to always be T instead of Option<T>. In that case you as a caller are never "getting more values than you were previously", but you can now certainly pass in more values than you could before.

Forcing callers to refactor their calls to use a new Option<T> type as a parameter simply amounts to a change in the type signature, but since the function is more liberal than before, it cannot break your assumptions (at least not assumptions based on the function type signature).

(For what it's worth, I do find Kotlin's T? to be more elegant than the Haskell/Rust-style Option/Some type. But then again, Kotlin is not fully sound, so there's that. Dart's implementation of T? will be fully sound though, so there are definitely examples of languages going that route.)


That is true! You're right that the perspective can be different.

You could write `<T: Into<Option<i32>>>` if you wanted, and your callers won't change.

Frankly, using options as parameters is just not generally good design, so this issue doesn't really come up very often, in my experience. There are exceptions, but it's exceedingly rare.


> Frankly, using options as parameters is just not generally good design

That's true. Even in annotated Python, I want to ensure all parameters are set before calling my functions/methods. Saves a lot of complications.


Ah, I see where the misunderstanding is. You can make it so you change only the function signature and behaviour or you can make it so you have to also change the function call site.

Ever since https://github.com/rust-lang/rust/pull/34828 you can transform any `f` that takes a `T` into an `f` that takes an `Option<T>` without any of the call sites changing.

For instance, look at this playground https://play.rust-lang.org/?version=stable&mode=debug&editio...

Your function `get_string_raw` which just handles `i32` can be transformed into a function `get_string` which handles `Option<i32>` without the thing calling changing how it calls the function. And the new `get_string` can accept `Some(i32)` or `None` or just `i32`.

Of course, this is slightly broad for brevity: you can now pass in anything that can become an `Option<i32>` but you can just define a trait to restrict that if you wanted.

You can get the sort of covariant effect that you wanted.
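Roughly what the playground demonstrates (my reconstruction, reusing the `get_string` name from above):

```rust
// Originally: fn get_string(n: i32) -> String { n.to_string() }
// Widened to accept Option<i32> without breaking old call sites:
fn get_string(n: impl Into<Option<i32>>) -> String {
    match n.into() {
        Some(v) => v.to_string(),
        None => String::from("none"),
    }
}

fn main() {
    assert_eq!(get_string(42), "42");     // old call style still compiles
    assert_eq!(get_string(Some(7)), "7"); // new call styles also work
    assert_eq!(get_string(None), "none");
}
```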


Well, yes, of course. The thing you could previously rely on being present can no longer be guaranteed to be present - that _should_ require code in the calling function to change.


Not if I loosened the requirement. Stricter adherents shouldn't have to change anything. This is poor language ergonomics.


To be clear, you're talking about the function signature changing from

    fn f(a: i32)
to

    fn f(a: Option<i32>)
?

Technically, yes, all callers would have to update, but practically, you'd just define

    fn f(a: i32) { fn_2(Some(a)) }
to avoid the breakage. That is, you're essentially telling the compiler how to loosen the requirements. Ergonomically, this seems rather fine. Especially if this means you gain (some) protections from the much more problematic case of restricting the requirements.


There is also the option of accepting `Into<Option<T>>`, which does cover both variants and is completely backwards compatible.


> does rust require everyone calling that function the old way to change their code? If so that not an improvement!

Can you share why that is bad? The compiler will tell you exactly where you need to make the changes.


Poor ergonomics: I loosened a requirement. Nobody should have to change their code. Kotlin does this right.


How often do you believe this really happens in practice? And does that truly outweigh the benefit of being able to define a precise contract on your APIs?

How many times have you written a function and a version later said "Oh, wait. I guess I don't actually need that required Foo parameter! I used to, but now I don't!"


> How often do you believe this really happens in practice?

Regularly if you're doing refactoring of code. Otherwise code becomes unchangeable because it's too big of a burden once it's clear it needs to change.

> And does that truly outweigh the benefit of being able to define a precise contract on your APIs?

I would point you to the XML standards which allowed people to do exactly that, and instead JSON won.


> Regularly if you're doing refactoring of code.

Are we talking about a published library or your internal-only code? If the former, I sympathize with the argument that relaxing a requirement should not force consumers to change their code. If the latter, then I find it much harder to sympathize. You're already refactoring your code, what is a few more trivial syntactic changes? You could almost do it with `sed`.

> I would point you to the XML standards which allowed people to do exactly that, and instead JSON won.

You know, this is an interesting point. And I guess I'm consistent because I absolutely hate JSON. I've only had to work with XML APIs very few times, but every time, it was perfectly fine! I could test my output against a DTD spec automatically and see if I did it right. It was great. JSON has JSON Schema, but I haven't bumped into it in the wild at all. So it seems like "we" have definitely chosen to reject precision for... easy to read, I guess?


You might really enjoy going and reading about CORBA and SOAP -- two protocols that have tight contracts. I'm sure you can still find java/javascript libs that will support both. And if you really really want, you can put them into production -- CORBA like it's 1999 while singing to the spice girls.

And what you'll find is that the tighter the contract, the more miserable the change you have to make when it changes. It's one thing if it's in one code base, it's another if it affects 10,000 systems.


I'll admit that I've never deployed a service with 10,000+ clients.

And CORBA (after looking it up) seems to include behavior (or allow it, anyway) in the messages. That's about much more than having a precise/tight contract on what you're sending. It's much more burdensome to ask someone to implement so much logic in order to communicate with you. I'm fine with the contracts only being about data messages.

SOAP is closer to what I'm talking to. Or even just regular REST with XML instead of JSON.

I'm asking genuinely, how would life be worse between a REST + XML and a REST + JSON implementation of some service? In either case, tightening a contract will cause clients to have to firm up their requests. In either case, loosening requirements (making a field optional, for example) would not require changes in clients, AFAIK.

The only difference that I see is that one can write JSON by hand. And that's fine for exploring an API via REPL, but you surely don't craft a bunch of `"{ \"foo\": 3 }"` in your code. You use libraries for both.

It just seems insane that we don't have basic stuff in JSON like "array of things that are the same shape".


> And CORBA (after looking it up) seems to include behavior (or allow it, anyway) in the messages. That's about much more than having a precise/tight contract on what you're sending.

The IDL (interface description language) for CORBA is a contract. It defines exactly what can or can't be done. It's effectively a DTD for a remote procedure call, including input and output data. (Yes it can do more than that, but realistically nobody ever used those features)

A WSDL for SOAP is similar. CORBA is basically a compressed "proprietary" bitstream. SOAP is XML at its core with HTTP calls.

> I'm asking genuinely, how would life be worse between a REST + XML and a REST + JSON implementation of some service?

So REST+XML vs REST+JSON alone (no DTD/XSD/schema) would be very similar -- other than the typical XML vs JSON issues. (XML has two ways to encapsulate data -- inside tags as attributes and between tags. Also arrays in XML are just repeated tags. In JSON they are square brackets []).

But lets say you need to change the terms of that contract (new feature usually), will code changes be required on client systems?

* If you used a code generator in CORBA with IDL the answer is yes, there will be code changes required.

* If you used a WSDL and added a new HTTP endpoint, the answer was no. If you added a new field to an existing endpoint, the answer was yes. (See [2])

* If you used a DTD/XSD, the answer is usually yes, since new fields will fail DTD validation using an old DTD -- that is if you validate all your data upon receipt before you process it.

And this was fine for services that didn't change frequently or smallish deployments.

In large systems, schema version proliferation became a nightmare. Interop between systems became a pain of never ending schema updates and deployments, hoping that you weren't going to break client systems. And orchestrating deployments across systems were painful. Basically everything had to go down at once to update -- that's a problem for banks, say.

What's sad to me is that was well known back in 1975. [1] When SOAP was developed around 2000 they violated most aspects of this principle.

> but you surely don't craft a bunch of `"{ \"foo\": 3 }"` in your code. You use libraries for both.

In python, JSON+REST is:

     resp = requests.post(url, json={"field": "value"})
What I find really appealing in REST+JSON is that validation just happens on the server side, and that's usually good enough. Sure there's swagger, but that's a doc to code against on the client side.

I don't feel that schemas and the need for tight contracts are all bad. I think if your data is very complex, a schema becomes more necessary than not when documents are bigger than 1MB, say. I also think it's fine if your schema changes rarely. And yeah, if you need a schema for tight validation, JSON kinda sucks.

But that's the question, do you really need tight validation, and therefore coupling, or is server-side validation good enough? And in most cases people tend to agree with that.

[0] https://en.wikipedia.org/wiki/Service-oriented_architecture

[1] The Practical Guide To Structured System Design (1st ed.), Page-Jones, Yourdon Press, (c) 1980, pp103, footnote at bottom of page.

[2] https://www.w3.org/TR/wsdl.html#_wsdl


> If you used a DTD/XSD, the answer is usually yes, since new fields will fail DTD validation using an old DTD -- that is if you validate all your data upon receipt before you process it.

I'm not sure I follow. DTD, as far as I know, allows both optional elements as well as attributes. If you add a feature, a client with the old version should continue to work correctly if you add optional elements. If they are NOT optional, then the client will fail regardless of whether you did XML+DTD or JSON, because your API needs that data and it simply wont be there.

What am I misunderstanding?

> What I find really appealing in REST+JSON is that validation just happens on the server side, and that's usually good enough. Sure there's swagger, but that's a doc to code against on the client side.

As a client, you don't have to validate your request before you send it. But it's nice (and probably preferable) that you can.

>In python, JSON+REST is:

> resp = requests.post(url, data={"field":"value"})

requests is not built-in to Python, right? So you are still using a library to JSONify your data. If you were to use urllib, then you'd have to take extra steps to put JSON in the body: https://stackoverflow.com/questions/3290522/urllib2-and-json

What's more, you still are not crafting the JSON yourself if you call json.dumps on a dictionary.

But, yes, crafting a dictionary with no typing or anything is still many fewer keystrokes than crafting an XML doc would be, even with an ergonomic library. But again, how much are you doing what you typed in your real code? That looks more like something I'd do at the REPL.


> If you add a feature, a client with the old version should continue to work correctly if you add optional elements. If they are NOT optional, then the client will fail regardless of whether you did XML+DTD or JSON, because your API needs that data and it simply wont be there.

Sure, but that begs the question: how is that better than JSON, exactly? Maybe strong typing? And why isn't a 400 Bad Request enough when the server fails validation?

I mean, you could say, "I know the data is valid before I send it." But you still don't know whether it works until you do some integration testing against the server -- something you'd have to do with JSON anyway. XML is only about syntax, not semantics.

From what I've seen, XSDs tend to promote the use of complex structures: nested, repeating, special attributes and elements. And if you give a dev a feature, s/he will use it. "Sure, boss, we can keep 10 versions of 10 different messages for our API in one XSD." But should you?

JSON seems to do the opposite: it forces people to think about data in smaller chunks. Yes, you can build large JSON APIs that hold tons of nested structures, but they get unwieldy quickly. And most devs would just break that up into different APIs, since it's easier to test a few smaller messages than one large message.

> As a client, you don't have to validate your request before you send it. But it's nice (and probably preferable) that you can.

If you unit test your code, good unit tests serve as validation -- something you should be doing anyway. If you fail validation on your send, you have a bug anyway -- it's just you didn't get a 400 Bad Request message from the server. But to the user/dev, it's still a bug on the client side.

> requests is not built-in to Python, right?

Right. But there's a lot of stuff not in the standard library that should be. The point is that normal day-to-day code can be just a one-liner using native Python data types.

> What's more, you still are not crafting the JSON yourself if you call json.dumps on a dictionary.

Sure, maybe a technicality here. If I type this, is it python or JSON?

    { "field": [ 1, 2, 3 ]}
Well, the answer is that both will parse it. json.dumps() just converts it to a string. No offense here, but I see it as a distinction without a difference.


> Kotlin does this right.

Typescript as well :-)


I think simple, trivial type systems offer simple, trivial correctness. Robust type systems can offer robust correctness (when used to their potential).


Types still exist in clojure of course, you just have to spend time reasoning about what they may be in any given place. The longer I've worked with clojure, the less I've understood this argument.


Writing your own type system in a Lisp is an afternoon task, but nobody does that because...


In addition to the benefits listed in the other comments, exhaustive checks in sum types are useful for catching unhandled cases, especially when refactoring.
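To make that concrete, here's a minimal Rust sketch (the enum and function names are invented for illustration):

```rust
// A sum type modeling the states a request can be in.
enum RequestState {
    Pending,
    Done(u32),      // HTTP status code
    Failed(String), // error message
}

// `match` must handle every variant. If a new variant (say,
// `Cancelled`) is added later, this function stops compiling
// until the new case is handled -- exactly the refactoring
// safety net described above.
fn describe(state: &RequestState) -> String {
    match state {
        RequestState::Pending => "still waiting".to_string(),
        RequestState::Done(code) => format!("finished with status {}", code),
        RequestState::Failed(msg) => format!("failed: {}", msg),
    }
}
```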


>The borrow-checker in Rust is kind of silly tool to use in the context of a managed language

I suspect a lot people are drawn to Rust more because of its type system and great tooling. I'd be perfectly happy with a version of Rust that swapped out the borrow checker for a GC, but such a language doesn't exist today, and Rust does exist.


I thought that F#, OCaml, Standard ML, Swift were such languages.


Yeah, the borrow checker is an interesting solution to a problem I don't have in a managed language. It's the main "safety" feature people coming to Rust are introduced to. I'm in agreement with a lot of what's said about Rust here: "one does not let one's friends skip leg day" -- but there's a lot more to good language design than memory management. Is it worth it?


One of the things I like about non-managed languages is the ability to have true destructors. Releasing resources is awkward at best in Java, et al.


Note that you can have linear types (which is what gives you what you're talking about) in managed languages. You can have them as an extension in Haskell, for example.

Rust is the only mainstream-ish language to really use them, though.


Rust uses a mixture of affine and ... "regular"? types. My understanding is that affine is a looser version of linear because the type doesn't have to be consumed.

You can have dtors in non-affine types (types that implement the Copy trait) in Rust as well. I'm really only talking about C++ style destructors. Those don't require linear or affine types. But, I agree, that in a managed language, having a linear type is one way to get predictable destructors to run.

Strangely, Swift has deinit{} for its class types (ref-counted), but not for its struct types (value types).


You cannot implement Drop for a Copy type.
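A quick illustrative sketch (hypothetical type name) of both halves of that rule: `Drop` gives deterministic, C++-style destructors, and deriving `Copy` on a type that implements `Drop` is a compile error:

```rust
struct Connection {
    name: &'static str,
}

// Drop runs exactly when the value goes out of scope,
// like a C++ destructor.
impl Drop for Connection {
    fn drop(&mut self) {
        println!("closing {}", self.name);
    }
}

// Adding `#[derive(Clone, Copy)]` to Connection would fail to
// compile: Copy types cannot implement Drop.

fn use_connection() -> &'static str {
    let conn = Connection { name: "db" };
    conn.name
} // `conn` is dropped here; "closing db" is printed before returning
```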


Fair enough! I never wanted to implement Drop on a Copy type, but I assumed you could.


> "one does not let one's friends skip leg day"

That is such a silly meme. It's silly because it suggests that Rust is a one-trick pony, or that it only focuses on memory safety, which isn't true in the least.

Besides, the borrow checker has at least one other major upside: Show me another mainstream language in which safe and efficient shared-memory concurrency and parallelism is as easy as in Rust.


In many ways the borrow checker is a tool for enforcing safe mutation of values. As a side effect it happens to prevent entire classes of memory bugs, but it also prevents many bugs that aren't related to memory safety. Historically that side effect was the motivation for designing and implementing it, but in practice it's useful as a general guide for writing rigorous and correct software. As a result, I don't think the borrow checker as silly in this context as you're suggesting.
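As a small illustration (hypothetical function) of a non-memory-safety bug it catches: mutating a collection while a reference into it is alive is a logic error even in a GC'd language, and the checker forces an aliasing-free rewrite:

```rust
// Append every even element of `v` back onto `v`.
fn append_evens(v: &mut Vec<i32>) {
    // `for x in v.iter() { v.push(*x); }` would not compile here:
    // the iterator borrows `v` immutably while `push` needs a
    // mutable borrow. The checker forces this two-phase version:
    let evens: Vec<i32> = v.iter().copied().filter(|x| *x % 2 == 0).collect();
    v.extend(evens);
}
```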


Rust is not running in the context of JavaScript. It's running on WebAssembly, which has no garbage collector.



One nice thing the borrow checker enables, besides GC stuff: You can guarantee that you never keep a reference to the thing a Mutex is protecting, after you unlock the Mutex.
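Concretely, a minimal sketch (hypothetical function): the data lives inside the `Mutex` and is only reachable through the lock guard, whose lifetime the borrow checker tracks:

```rust
use std::sync::Mutex;

fn bump(counter: &Mutex<i32>) -> i32 {
    let mut guard = counter.lock().unwrap();
    *guard += 1;
    *guard
    // The guard is dropped here, unlocking the mutex. Trying to
    // return a reference into the protected data (which would
    // outlive the guard) is rejected by the borrow checker.
}
```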


Replying to my own comment, and very much related to that, another is thread safety. In most languages, if I have a type with some method that e.g. increments a member variable, that method is non-thread-safe by default. (Modifying internal data structures can be even worse of course, depending on the language and the data structure.) In Rust, such a type is trivially thread safe by default, because the languages just won't let you modify it from multiple threads without a Mutex or similar. When you want types that can be shared and modified without locks, the author of the type has to take steps to implement this (using atomics, lock-free data structures, etc). That means that there's generally no need to trawl through docs or source code to find out whether a method is thread-safe or not. The type itself knows.
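A small sketch of that default (illustrative names): sharing a bare `&mut i32` across threads won't compile, so the locking contract ends up encoded in the types:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Increment a shared counter from several threads. Without the
// Mutex (e.g. sharing a plain &mut i32), this would not compile:
// the type system itself expresses the thread-safety contract.
fn parallel_count(threads: usize, per_thread: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}
```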


For anyone curious about Clojure, I'd recommend checking out the CircleCI frontend [0] for an example of a production ClojureScript app.

[0] https://github.com/circleci/frontend


One of the problems with Clojure is the same as with JS: it lacks a static typing system. It lets through a ton of bugs which the compiler normally catches, both trivial and nontrivial.

Yes, Typed Clojure addresses this problem somehow, as does Typescript.

But Elm, Rust, and the language formerly known as BuckleScript address the problem in a nicer and better-integrated way, to my mind.


I don't think anyone who uses Clojure in anger would put it on the same level as JavaScript when it comes to letting through a ton of runtime errors. The integrated tooling around editors, clojure.spec and the REPL is very good at curtailing the downsides of dynamic typing.


There is no javascript in a rust web application.


[flagged]


A lot of ad-hominem and not a lot of substance in your comment.

>due to some misguided agenda that it’s safer (code is data!!!)

What does that even mean? Rust is a lot safer than C or C++, although of course here we're talking about effectively replacing JS so it's a different discussion.

I would personally love to write front-end code in Rust, not because it's necessarily the most appropriate language for such a task but because I'm familiar with it and can reuse what I already know. Basically the same justification people have used to bring JS outside of the browser.


> What does that even mean?

Turing machines are not safe; code is data, and the machine has no way of telling the difference. For instance, if I were to compile valid Rust code, then open the binary in a hex editor and start changing it, Rust can and will do nothing to stop me, and as long as the code is valid it will run.


Clojure is nice for heavy data engineering projects that require robust/stable/mature tech like the JVM. But as a full-stack language for a web app, it just adds a ton of complexity over just using JS.

In ClojureScript, interacting with the JS ecosystem is painful because of its reliance on the Closure compiler.

In Clojure, it's almost the same: most Java libs are over-engineered and horrible to use, but you need to reach for them because Clojure lacks an ecosystem.

So while Clojure is a better/nicer language than JS, the tradeoffs are not worth it if you want only one language for your webapp (SPA and server).


> adds a ton of complexity over just using JS.

What "tons complexity" are you talking about? Clojure is a much simpler language (in the decomplected sense) than JS. Less syntax, better build tools, uniform stdlib, no webpack/babel nonsense. This sentence makes no sense.

> interacting with the JS ecosystem is painful cause of its reliance on the closure compiler.

Again, what? With tools like shadow-cljs, requiring JS libs and using them in your project is trivial (just require and import like you would any cljs library).

> In Clojure, is almost the same, most Java libs are over-engineered and horrible to use but you need to reach for them because Clojure lacks an ecosystem.

I've been writing Clojure for 10 years and rarely have I had to reach for Java. This is absolute rubbish.

You seem to hold strong opinions about a language you barely understand.

Edit: reading your comment history, you seem to have an axe to grind with Clojure.


It is a valid point; I don't know why the downvotes.

I think Rich had to make a decision: write a completely new Lisp from scratch, or leverage an existing ecosystem and create a Lisp on top of it.

>> In Clojure, is almost the same, most Java libs are over-engineered and horrible to use but you need to reach for them because Clojure lacks an ecosystem.

The biggest problem with Java libs is the mixed quality. Many big data projects have this problem: if you peek under the hood, you'll be amazed at abstractions leaking into different bits and pieces of the system. My favourite example is how ORC imports the Hadoop FS:

https://github.com/apache/orc/blob/master/java/core/src/java...

There is one more problem with Clojure that I find annoying: the actual Java interop. I ran into issues with this many times, and some Java libs are almost unusable without a thin wrapper written in pure Java.

Other than that, I think Clojure is still one of the best options out there.


What you see as negatives, I see as Clojure's main selling points.

I usually avoid languages that keep reinventing the wheel of established libraries on the platform, "'cause it isn't idiomatic".


Don't disagree with you but two things:

1. Clojure doesn't have as good interop as Kotlin.

2. For some heavy interop projects, it's simpler to write the thing in Java.


What interop does Kotlin have that's better than Clojure's when it comes to calling Java code?

And the other direction is just as much fun: try calling Kotlin coroutines from Java without having to write wrappers, or Kotlin code without putting @Jvm... annotations everywhere.


It’s been years since I used clojurescript so ymmv, but imo integrating so tightly with the closure compiler was a big mistake. It adds tons of futziness and build complexity for what, in practice, are usually pretty marginal reductions in bundle size that end users won’t notice.


Never had a problem with the closure compiler vis-a-vis ClojureScript, is this really a thing?


I prefer to use languages in the domains they were designed for, and I think it's almost impossible for a general-purpose language, or a systems programming language, to be better than a domain-specific language that was designed with the right trade-offs in mind. Elm having less abstraction power and not offering full interop with JS was a design decision. For this reason, we have lots of high-quality packages that make sense for Elm, instead of bindings to JS libs. On the other side, I can't see any language being better at systems programming than Rust.


> “ I prefer to use languages in the domains they were designed for”

So for web development, do you use PHP?

Because neither Python, nor Ruby nor even JavaScript itself was designed for web development.


JavaScript was definitely designed for web development. Though, at the time web development meant a tiny bit of magic in an otherwise static html document.


I wouldn’t call “animating a piece of text” on an HTML page “web development”.

16 bit home computers could do this better 10 years before JS.


The crucial distinguishing factor is that JS was made to animate a bit of text... on every device in existence, past present or future. That's the biggest difference between the web and native.


The "every device in existence" fails with node, at example, which has a different standard library, and every browser support libraries and even syntax differently (which is why we backport stuff with tools like babel)


I can only guess but, I think the parent means web development in the context of an HTTP server. (GET/POST/etc).


Most of Ruby's life has been dedicated to web development.

The original use-case at birth of a language matters less, as programming languages are living things that evolve and mature, sometimes morphing into very different things than they were at the beginning.

Likewise Rust's current lifespan has almost entirely been dedicated to taking on C/C++ and their use-cases in systems programming and server applications. That's all that really matters here.


No, because PHP was not designed at all, it was just a hack that solved some problems and started to evolve. Anyway, web development in '95 is not the same thing as today.


Your statement would pretty much also be true if you substituted Javascript for PHP.


Language design does not end with the release of the first version.


> nor even JavaScript itself was designed for web development

Javascript was designed for web development of the 90s. It evolved along with web development.


> I can't see any language being better at system programming than Rust.

Ada/SPARK, but it won't get a standing ovation from current generation of devs adopting Rust.

Luckily it gets some headlines exposure via NVidia and Genode OS adoption.


I actually see Ada/SPARK referenced pretty frequently in discussion with other Rust programmers, and I've been to at least one Rust talk in which learning from Ada was the main topic.

Anyway, it's hard to blame people for looking elsewhere when the best Ada compilers were proprietary for so long.


I don't see it that much, beyond some occasional assertions crediting Rust with features that Ada already had first.

As for compilers being expensive, while true, I learned Ada via books a couple of years before being able to put my hands on a compiler.

Then again, maybe that is no longer a fashionable way of learning.


Ada seems very nice, but compared to rust I think it lacks momentum and (some) ergonomics. It's sad, but a C-like syntax on top of the Ada semantics would possibly be more popular.


I really struggle to see the appeal of Rust for frontend web development over TypeScript.

Perhaps there are some rare scenarios where you need to eke out every little bit of performance... But in normal circumstances, TypeScript offers a fantastic combination of familiarity, expressiveness and ... if not type-safety, then at least some degree of type-sanity.


I would choose Rust over TS not for any performance characteristics, but because I strongly prefer it as a language, due in large part to features like pattern matching, sum types, exhaustiveness checking, traits, etc.

I do quite like TS, but you can never really get away from the fact that you’re still limited by many historical JavaScript gotchas, and the type system is nowhere near as powerful as Rust’s.

Edit: That being said, as a business decision, TS will often be a better pick because it’s so much easier to find talent, and the learning curve is much smaller.


Sum types and exhaustiveness checking are doable with typescript! Though it's not especially ergonomic

https://www.typescriptlang.org/play?#code/C4TwDgpgBAygrgWygX...


There are major benefits to using the same language on backend and frontend though, especially with typed languages where you can share types. In my experience, this makes a much bigger difference than the specific language you choose.

So the real question imo should be which is better considering both frontend and backend. I suppose it’s also a very project-specific question. Maybe a heavy frontend project = ts, while heavy backend = rust.


Typescript's type system is actually pretty limited. I spent about two weeks trying to do something nice with it before giving up and trying Scala.js, where I was able to do everything I wanted within a couple of days.


How long ago was that?

Typescript gets better all the time; in the last year or so I haven't run into anything I couldn't do in Typescript (and I use it all the time, alongside Scala).


I think it was last year. There was no nominal typing at all, the unsound variance of arrays meant I couldn't really trust my types anyway, I wasn't able to treat structures generically as records the way I'd like (i.e. no shapeless equivalent) and of course there was no HKT.


Ok. I probably don't use such advanced parts of the type system (I didn't know about Scala's shapeless until now), so I might not notice the differences.


Rust wins in familiarity... if you're already familiar with Rust!

If you have a Rust project that needs frontend code it's nice to be able to use the same language everywhere.


As a longtime user of both, Rust has a long way to go on the frontend before I would consider switching.


For anyone looking for an Elm like experience but without the limitation imposed by its creator, take a look at Elmish, a port of the Elm architecture in F#.

Advantages: You can use F# on the front and backend. Pretty good js interop with a lot of opportunities to shoot yourself in the foot.

Disadvantages: You will probably need to use the Elm docs/tutorials to learn about Elmish.


> It was an outgrowth of failed experiments in FRP from the Haskell world

I'd like to hear more about this failed experiment. Reflex is alive and active and perhaps is the best tool out there for fullstack applications.


As someone who uses Reflex at work and has also written a bunch of Rust, I wish Rust the best but am highly skeptical it's going to be a productive tool, and not just nerd-fodder, for user interfaces.

Reflex is distilled mutation and weak-reference black magic, and the safe interface it provides relies heavily on higher-kinded types. Even if someone goes through all the trouble to implement it or something like it (and that would be cool!), I don't think the resulting algorithms can be packaged up in nice abstractions.

Rust is a great language, but it just shouldn't bother competing where GC will do. There's no point trying to win a race with that handicap.

------

I do believe in "one language for all tasks", actually. It's just that one language will have split ecosystems as it will support many idioms that don't interoperate well. Put another way, let's start with "multiple languages, perfect type-safe FFI", and then go for endosymbiosis.


Why not have tools that specialize and are strong in some areas instead of trying to have a Swiss Army Knife language? In other professions it's normal to have different tools for different tasks. Or is there something I'm missing?

I personally would be happy with many different languages that are focused on front end development, like how we already have many that are focused on back end development.


I noticed Elm is listed as an inspiration by a number of Rust front-end frameworks (seed and iced for instance). It seems a lot of people feel the same in the Rust community.


I don't understand why Rust is more compelling than ReasonML (OCaml).

Reason is more expressive and on the web low level memory management isn't a concern.


Isomorphic codebases. I would much rather use Rust on the back-end than ReasonML/OCaml.


Why's that? OCaml has excellent performance and is a lot higher-level than Rust. I personally see Rust as a language that should really only be used when utmost performance or hard latency requirements make GC'd languages unsuitable. Most web-app backends would run only marginally better in Rust than in OCaml, and would incur a steep productivity and maintenance penalty.

A lot of people seem to be suggesting Rust as a GC language replacement instead of a C/C++/asm replacement, which I've never really got. Is the mutation/lifetime/pointer management really worth it for this kind of thing?


I think it stems from the root cause that many equate GC languages with heap allocation everywhere, and never properly learned how to do value allocations in the languages that offer language or library features for them.


I was wondering about that too. OCaml has a really good track record, and the whole BuckleScript ecosystem has also proven to be good for reliable frontend development.


> I am hopeful that Rust can achieve what Elm did not.

I am hopeful we can see the web ecosystem evolve into being capable of 'Single Language Web Applications', as Rust seems to be, and C# as well. Of course there's Flutter, but it seems to just paint on a canvas instead of reusing the DOM.


I'm not sure Rust provides a lot of value for web development. Instead of Rust, use something like OCaml, which has a nice frontend and backend story. It's certainly going to be a lot less work than using Rust. And more functional to boot.


It's worth noting that the WASM package takes up 476kb shipped and the wasm.js loader is another ~25kb. So this bare-bones site is already sitting at > 500kb of JavaScript shipped to the client.

I am glad to see things like this are possible now with WASM, but there is a lot of room for improvement here that I hope gets captured by browser vendors over time.


Just to clarify, 476kb of WebAssembly != 476kb of JavaScript. Still, your point stands in that wasm-based sites will likely never beat their pure JavaScript counterparts in terms of data transfer; you'll need to make up the increased cost of wasm somewhere else. One place that browsers attempt to make up the difference is in the use of streaming compilers, where the browser can start compiling wasm as it's coming in from the network, whereas with JavaScript browsers have to wait until the entire resource is delivered before the parsing stage can begin.


To expound on this difference: 476KB of WebAssembly loads like 476KB of images (that is, super fast) rather than like 476KB of JavaScript (which is pretty slow and blocks the main thread). https://hacks.mozilla.org/2018/01/making-webassembly-even-fa... explains more.


I could build his example in 2-3kb of vanilla JS unminified :)


Thank you for the explanation - I didn't know this about WebAssembly! I followed a link from another reply here and learned something new today.


Luckily, WebAssembly is stream-compiled while data is being fetched in parallel; unlike JS, it does not need to load the whole file to start working on it :)


I like how this post uses diffs to demonstrate how the code changes at each step, it's much easier to follow along than with blog posts that just show the complete code at each step and leave mental diffing as an exercise for the reader.


Dug in the source, I believe they're using Prism's Diff Highlight plugin:

https://prismjs.com/plugins/diff-highlight/


We're finally getting back to this point where you can choose whatever language you want to program for the web. It only took us 15 years of being flamed every time you pointed out you didn't like Javascript but were forced to use it to be part of the modern web. One of the motivations for Viaweb was being able to do Lisp, hopefully this gets the web back to that mindset of only the behavior mattering and whatever language you want to use go for it.


It's ironic because this is only possible due to the continued development of sophisticated web tooling, which is the biggest complaint made by people who complain about the state of web development.


The tools are sophisticated but also much harder to use. Things like modern front-end JavaScript and Kubernetes are very complex beasts to tame. Their value is dubious for many of the use cases people choose them for, but they are undeniably powerful tools that advance the state of the art by a lot. Same with NoSQL: it became hyped and overused, but some really powerful and easier-to-use technology eventually emerged from it, like Redis, which is now part of most modern web app stacks.

I still just use Django and a lot of server-side HTML templates. Developers often have a hard time knowing when to use a given technology and often end up bringing tech into their stack they don't need. This plays out in 5-8 year cycles. Meanwhile, RDBMSes and simple things like Django with minimal front-end JavaScript can still solve the vast majority of problems people have in the web app space. It's the classic hype/adoption/cynicism cycle we have had forever.


This just reads to me like "my personal preferences are good and things I don't like are hyped up and of dubious merit".

If it were me, I'd prefer rails to django any day of the week, that's a personal preference, but if we're talking about building an SPA you're typically going to be building a worse solution if you can't SSR your client views, which you need a js backend to do.


I've never been fully confident that I know what SSR refers to -- but it's just the creation of static HTML on the server, right? As opposed to shipping a bunch of async DOM updates contained in a JS script executing on load, client-side.

But if I have it right, I have no idea why a JS backend would be required to do SSR; this is exactly the same domain of work PHP, Rails, and Django have always done, without Node.js needing to enter the picture. For an SPA (which I believe is just a bunch of JS fetching data from JSON APIs and rendering client-side), you'd need JS to handle client-side rendering, but if you can replace that with WASM (once it has DOM support)... I don't see why you'd need JS anywhere.

FYI I'm defining everything because I'm pretty sure there's a mistake somewhere in my understanding of the problem


Nah it's used in two related, but kinda confusing ways, it's not just you:

Traditionally, it means rendering your HTML on the server. Exactly what you're saying.

It also refers to a specific technique that's used to implement rendering your HTML on the server by effectively running your client side app server side, then shipping its output. This requires some work to get right, and so it's presented as a feature of client-side libraries/frameworks.


Ah

The actual client code is being executed.

Wait, that's just weird -- if you're effectively moving the work from the client to the server anyway, such that it's only executing on the server... I don't see what you've achieved beyond what standard language-agnostic SSR already gave you -- except that you can design everything client-side and conveniently migrate the heavier work server-side without interruption or a rewrite?


It gives you the best of both worlds: fast initial load, accessible, works without JS. But you also get all the client-side goodness: highly interactive views, animations, and far fewer bytes transferred for subsequent page loads. Plus, writing a complex frontend is simply a far better experience in something like React than in Ruby, Python, PHP, etc.

Finally, it lets you also render them to React Native using a lot of shared code. And you can share code between your backend APIs, etc.

To be honest, I’ll take Typescript over anything, even if it wasn’t the default for the web. It strikes the perfect balance of flexibility, concision, safety, debugging ux, and has a huge ecosystem to boot. I think client side JS these days if anything is underrated and the WASM hype won’t change much - if you want a lightweight, accessible app then doing a SPA in JS (with SSR) is actually not even a compromise, it’s truly superior to any alternative. Now, with a big caveat: you need to invest to get it all set up properly. No one has really “railsed” it yet, as far as I can tell.


> such that its only executing on the server

It's not, it can run on both. You render on the server for the initial page load, and then on the client for every page after.

Or not! The point is, now that you've unlocked both, you can use either one, in the way that works best for your application, in whatever ratio makes sense.


A significant consideration here is SEO and crawlability. Rendering the page server-side will serve a page with content even without JavaScript running on the client, yet for clients that do have JavaScript they still get the enhancements of an interactive JavaScript app.


> I've never been fully confident in knowing what SSR refers to

https://developers.google.com/web/updates/2019/02/rendering-... is a great resource on the subject.

You can be fully Server-side (regular php), you can do full JS client-side rendering, and you can have hybrids that mix both.


SSR typically refers to server side rendering of a single page app. I.e. render the HTML for a given route of an SPA. For that to work, you need to be able to execute your SPA code on the server which will be JS. You are right that WASM will change all of this, but the ecosystem is not mature yet.


To some extent, that is true regarding my personal preferences, but being in tech and writing code for over 20 years teaches you a few things. Developers should be more picky about the tools they adopt without becoming too cynical or dismissive of new technology. Every so often you have to go through your technology shed and make sure you aren't biased towards existing, comfortable tools. It's nice to work out of your comfort zone and learn new things, but it's also nice to be very efficient at problem solving. When I feel a new tool or technology has crossed the threshold from hype to proven value, I tend to take it seriously and learn it. I try to avoid getting sucked into tech stacks that I don't think will make it. Sometimes I am wrong, but usually not. I could have made my description of technology more generic, but in general, things like Django/Rails/Postgres/MySQL/NodeJS/Redis are battle-proven technology that won't let you down. IMO that is the hardest part about being a lead engineer / CTO -- knowing when to say yes and no to given technology stacks :)


I agree with everything you've written here, particularly your process of tool evaluation and refinement. My only push back is with respect to the way you characterized the merit of "overhyped" tools (in your previous comment) that didn't line up with the productivity profile of your personal experience. I agree that Django/Rails/Postgres/MySQL/NodeJS/Redis are all reliable tools, but I am not a fan of the philosophy that labels other people's work as unnecessary or harmful simply for existing outside the pantheon of worthy tooling.


In isolation from any constraints, new things are always worth playing with and trying. I know I am certainly more biased (hopefully in a mostly good way) now than I was when I started in tech. Your thought strikes me as very similar to when someone shares a hobby project they are working on and someone else doesn’t get it and states: “wow, you must have a lot of time on your hands”. So I do try to take new projects seriously while at the same time balancing time constraints. It’s hard :)


> If it were me, I'd prefer rails to django any day of the week, that's a personal preference

This is totally not what the parent is getting at. Your comment reads much more like "my personal preferences are good" than the parent's does.


They complain about Javascript and Kubernetes then praise Django and Redis, all in the abstract without any consideration for the totally different problems these tools actually solve. I injected my own personal preferences in-kind but labeled them as preferences instead of slamming other tools as overhyped and without merit.


> if we're talking about building an SPA you're typically going to be building a worse solution if you can't SSR your client views, which you need a js backend to do.

I have not used React with Rails and maybe I'm misunderstanding your comment, but the react-rails readme includes a section about SSR via ExecJS and integrated with Rails:

https://github.com/reactjs/react-rails#server-side-rendering
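
For what it's worth, once react-rails is configured, enabling SSR is just an option on the view helper; a rough sketch (the component name and props here are made up for illustration):

```erb
<%# app/views/pages/show.html.erb %>
<%# prerender: true renders the component to an HTML string on the server %>
<%# (via ExecJS), and React then hydrates it in the browser. %>
<%= react_component("HelloWorld", { greeting: "Hello" }, prerender: true) %>
```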


ExecJS functions as a JS backend in this case; it's just a Ruby interface for Node/Rhino/V8 that's a little bit clunkier to use.


That is obvious (it's in the name). The way your comment is phrased reads to me like you are saying that one can't use Rails and has to switch to a full JS backend for SSR. If that's not what you are saying, then it's unclear what you are actually saying and why you juxtaposed Rails / Django against a JS backend.


My point is that you literally cannot SSR your client JS views without a JS backend, so Node has a clear engineering purpose in this respect. Yes, you can import a second backend into your main backend to avoid writing Node, and that's fine; maybe your team is mostly made of Rubyists or maybe your existing code base is Ruby, and there's nothing wrong with that approach. But it is more complicated, with more layers of abstraction, than just using a JS backend if you must build an SPA.


> import a second backend into your main backend

This is not typically how the term "backend" is used in this context. Most mature apps will use a variety of tools and services built in a variety of languages, but this is not referred to as "importing a second backend." In fact, your comment is apparently the first use of the phrase "import a second backend" on the entire internet: https://www.google.com/search?q=%22import+a+second+backend%2...


I'm speaking metaphorically. When I say "import a second backend" I'm saying that the thing being imported into the ruby backend is literally a wrapper around the exact same runtime that is typically used to run js backends, the corollary being that the "ruby" part is unnecessary added complexity from an engineering perspective unless you have some other compelling reason to use ruby (which is certainly possible).


but if we're talking about building an SPA you're typically going to be building a worse solution if you can't SSR your client views

It's not worse, it's fine. It mostly depends on the requirements: why take on the extra complexity of SSR if you don't need it?


Most of the time SSR gives massive web accessibility benefits. Ignoring web accessibility because you don't want to implement SSR is likely a worse solution for a large portion of your userbase.


That's why I said "typically". One of the biggest complaints about SPAs is the perceived slowness due to the app loading and rendering assets in the browser, as well as the typical HN noscript complaints, as well as accessibility issues, SEO issues, and other problems related to back button behavior and url routing. SSR solves all of these troubles.


I'm not sure exactly what you're referring to, but being able to run Rust both on the server and client should allow SSR, no?


People complaining about that are generally talking about the churn in front end web frameworks and JavaScript development in general.

None of that was necessary for WebAssembly to be developed. Unless you mean that the annoyance of the constantly changing JS ecosystem motivated people to push for it.


It's 2020 and I still see this "churn in frontend web frameworks" mentioned. I don't see how this is true anymore. The churn was very real in the early 2010s, but these days almost all web development is done in React [1]. If not React, it's Vue, Angular, or Ember. I'm sure there are a lot of niche frameworks out there that are in use, but that's not what churn means. As a web developer you can learn React once and pretty easily never have to pick up another framework again.

Not to mention that at this point the complexity of web development isn't exclusively held within your framework choice. Thinking about global state, eventing, and architecting your project are where the real hard problems are.

[1] https://2019.stateofjs.com/front-end-frameworks/#front_end_f...


Nobody wants to hand-write wasm; you need a tooling pipeline to make that type of development workflow practical. It's not any different from Babel or TypeScript in this respect; using Rust instead of JS is "churn" just as much as any of the other options for building web pages that you don't have to use.


>Nobody wants to hand-write wasm, you need a tooling pipeline to make that type of development workflow practical,

Yes, and nothing about Babel or TypeScript or Node was required for the creation of wasm or a wasm compiler.

People complain about churn in the JS ecosystem because of the rate that frameworks and tooling rise and then fall out of favor.

I don't see the irony at all in people complaining about one ecosystem while being excited that they are being given a way to bypass that ecosystem all together.


> People complain about churn in the JS ecosystem because of the rate that frameworks and tooling rise and then fall out of favor.

And building web pages with Rust is just another example of this phenomenon. It's ironic because somehow it's viewed as a positive thing by people who commonly complain about the introduction of new tools into the web development ecosystem; the power of Rust hype somehow obscures the fact that this is exactly the same thing such detractors always complain about.

For the record, I love rust and wasm and think this is great, but I have always been opposed to the framing that people creating new web development tools is a bad thing.


People complain that the JavaScript ecosystem changes too frequently.

It is perfectly logical to believe that this is true, while simultaneously believing that allowing new language ecosystems to target the browser, could result in a new ecosystem that is much more conservative and changes at a slower pace than the JS ecosystem for whatever reason (a language with a larger standard library, a language with a different culture etc...).

A one time change to another ecosystem and then a slower pace of changes.

Who knows if this will be the case, but it is a logically consistent position to hold, and there's nothing ironic about it.


It's not logically consistent. Why does it matter whether the tooling compiles down to JS or wasm? If anything, expanding the web development ecosystem to include dozens of new languages and frameworks will increase "churn" as many new approaches get popularized on the front page of HN. It's absolutely no different than a new JS framework.


That depends on the scope of what you're looking at. Instead of looking at webdev as a whole, if you're just looking at individual ecosystems then while global churn may increase, it's entirely possible that you could move into an ecosystem with a lower level of churn.

It's perfectly logical to criticize the JavaScript ecosystem for having too much churn, while calling for a change that will increase Global web development churn, while simultaneously creating another subset of the global ecosystem with a locally lower level of churn--assuming you are only planning on working in that particular ecosystem.


It's not logical. It doesn't matter if the source code is JavaScript or not; if it compiles down to JS or wasm, it is definitionally equivalent to "JS churn" because it represents an additional option in the JS tooling landscape that a developer can choose from. Rust is no different from TypeScript or PureScript or Babel or Elm or any of the other variety of options available for producing web applications.

If you're suggesting otherwise, you'd have to explain what exactly is bad about churn in general and why those negative properties somehow dissipate based on the source language. By your logic, if every tool in the existing JS landscape compiled down to wasm, "churn" would somehow be a non-issue; but if that's the case, then "churn" must mean something different than what is regularly complained about in every JS technology thread on this forum.


You seem to be deliberately missing any point I'm making. I think it's because you've dug yourself in and you're more concerned about winning an argument than in what I'm saying.

But here goes.

First, I don't care about churn--doesn't bother me. It does bother a lot of people though.

Imagine there's a console called a ChurnBox. ChurnBox only runs code written in Churn. The Churn ecosystem is known for adopting and then abandoning frameworks at a rapid pace. People who run Churn teams have a reputation for quickly adopting the newest Churn framework of the moment.

Now you are a developer who wants to work on the ChurnBox, but you absolutely despise learning new frameworks.

You could try to find a company that uses Churn with no frameworks, but you're a bit lazy and it's hard to find that kind of thing out just from reading job advertisements, so you stick with your old trusty language--Molasses.

Molasses is known for its very expansive standard library and its very slow release cycle. The community is also very conservative, so most Molasses developers stick with the standard library and new frameworks are rarely released.

One day the company that makes ChurnBox announces that they are going to run code written in a low level language called Chasm that is designed to be easy to compile to.

Then some of the Molasses maintainers announce that they are releasing a Molasses to Chasm compiler.

You're excited because you think that maybe in the future you can completely ignore the Churn ecosystem, but still get to write programs for the Churnbox. You think that maybe in the future you'll be able to easily find Molasses Churnbox shops to work with that will adopt the conventions of the existing Molasses ecosystem, and thus rarely adopt new frameworks.


> You seem to be deliberately missing any point I'm making. I think it's because you've dug yourself in and you're more concerned about winning an argument than in what I'm saying.

I think you're just too condescending to fathom someone disagreeing with you.

> Imagine there's a console called a ChurnBox. ChurnBox only runs code written in Churn. The Churn ecosystem is known for adopting and then abandoning frameworks at a rapid pace.

Totally irrelevant. Pick a framework and use it, there is nothing preventing you from doing this regardless of the language; that's an indisputable fact. If you feel compelled to make engineering decisions based on superficial fashions rather than as an answer to specific needs, that's your mistake, it has nothing to do with the language or the framework.

> One day the company that makes ChurnBox announces that they are going to run code written in a low level language called Chasm that is designed to be easy to compile to.

Absolutely nothing has changed; that's exactly how all modern JS tooling works today! In fact, almost every language out there has some type of lang-to-js project. The only thing that makes WASM special is the memory and performance capabilities that cannot be achieved with vanilla JS; however, the "too much churn" crowd are usually quick to point out that low-level performance is almost always complete overkill for a web application.

> Now you are a developer who wants to work on the ChurnBox, but you absolutely despise learning new frameworks.

WASM presents you with the exact same problem because you now have to learn a new framework that models your applications with respect to a browser environment. 99% of JS frameworks from the last two decades continue to work today, so the only reason you would ever upgrade to something different is because you have a specific need that justifies the upgrade or you are the very thing that you're complaining about and rely on the slow pace of Molasses to prevent you from refactoring your production applications every time someone's personal project hits the front-page of HN.


>I think you're just too condescending to fathom someone disagreeing with you.

Kick rocks.

>Totally irrelevant. Pick a framework and use it, there is nothing preventing you from doing this regardless of the language; that's an indisputable fact. If you feel compelled to make engineering decisions based on superficial fashions rather than as an answer to specific needs, that's your mistake, it has nothing to do with the language or the framework.

First, engineering decisions are often based on superficial fashions, because fashion influences executives, investors, engineering leadership, and available talent.

Second, I wasn't talking about engineering decisions, but career decisions relating to a potential work environment.

Third, those aren't the decisions I would make, but I can understand how someone would arrive at them logically.

>Absolutely nothing has changed, that's exactly how all modern JS tooling works today! In fact, almost every language out there has some type of lang-to-js project, the only thing that makes WASM special is the memory and performance capabilities that cannot be achieved with vanilla js, however the "too much churn" crowd are usually quick to point out that low-level performance is almost always complete overkill for a web application.

So wasm is no better than JS, and it isn't a better compilation target. Except where it is better. But that's irrelevant because the straw men you're conjuring don't think that part matters. Gotcha.

It doesn't matter if wasm is actually a better compilation target (I think that it is) because it is perceived to be a better target, and is attracting interest in places that compiling to JS didn't (Blazor for one).

>99% of JS frameworks from the last two decades continue to work today, so the only reason you would ever upgrade to something different is because you have a specific need that justifies the upgrade

If you're talking about personal projects, sure. I don't know anyone who is concerned that they might have to switch frameworks for their personal projects. The concern is rapid adoption and abandonment of frameworks within potential employers.


> First engineering decisions are often based on superficial fashions because fashion influence executives, investors, engineering leadership, and available talent.

Executives and investors aren't hip to the latest software engineering trends, and even if they are, it's not at all common that non-technical executives are asking engineers to rewrite the business in a new framework, this is pretty much never what they want, even in the rare case when that's actually a good idea.

As far as engineering leadership goes, if they're rewriting the business based on technology fashion they're simply a terrible engineering leader and that is the consensus opinion within the industry.

Finally, the talent pool argument just doesn't square with reality; the JS talent pool is one of the most robust in existence, the idea that businesses are having trouble hiring engineers because their JS frameworks are going out of date is fiction.

> So wasm is no better than JS, and it isn't a better compilation target. Except where it is better. But that's irrelevant because the straw men you're conjuring don't think that part matters. Gotcha.

Wow. That's a very disingenuous twisting of what I wrote. It's not a strawman; it's an extremely common criticism of the JS ecosystem, i.e. that it has too much complexity and that all these fancy tools don't contribute much of anything useful except fluff for cowboy programmers to pad their resumes. The use of the word "churn" implies that nothing useful is gained; otherwise it's not "churn", it's "progress", and something to be lauded rather than looked down on.

> The concern is rapid adoption and abandonment of frameworks within potential employers.

Right... so potential employers who want to build front-end web applications have an overflowing arsenal of battle-tested front-end tooling in the JS ecosystem and now WASM comes along and offers an exponential increase of nascent front-end tooling options, yet somehow you can't connect the dots between why the introduction of these tools in the WASM ecosystem produces EXACTLY the same effect as it does when new tools are introduced into the JS ecosystem. How long before we start to see "Why we rebuilt our front-end in Rust/Ruby/Go/Haskell etc" blogs hitting the front-page? It's exactly the same damn thing.


>Executives and investors aren't hip to the latest software engineering trends, and even if they are, it's not at all common that non-technical executives are asking engineers to rewrite the business in a new framework, this is pretty much never what they want, even in the rare case when that's actually a good idea.

>As far as engineering leadership goes, if they're rewriting the business based on technology fashion they're simply a terrible engineering leader and that is the consensus opinion within the industry.

Yes this is basically always a bad idea. That it's a bad idea and doing it makes you a bad executive/leader/whatever doesn't stop it from happening.

>Finally, the talent pool argument just doesn't square with reality; the JS talent pool is one of the most robust in existence, the idea that businesses are having trouble hiring engineers because their JS frameworks are going out of date is fiction.

You can always find candidates. Whether you can find candidates in your particular location, for the price your company is willing to pay, who don't require more training or ramp-up time learning your framework than your company is willing to provide is another story entirely.

>Wow. That's a very disingenuous twisting of what I wrote. It's not a strawman; it's an extremely common criticism of the JS ecosystem, i.e. that it has too much complexity and that all these fancy tools don't contribute much of anything useful except fluff for cowboy programmers to pad their resumes. The use of the word "churn" implies that nothing useful is gained; otherwise it's not "churn", it's "progress", and something to be lauded rather than looked down on.

You aren't the one making the argument, you are picking the version of the argument that fits your argument.

One of the biggest arguments for using vanilla JS is performance, and there are plenty of people arguing against churn and for performance. But you dismissed the performance advantage by saying that people arguing against churn don't care about low level performance.

>"Why we rebuilt our front-end in Rust/Ruby/Go/Haskell etc" blogs hitting the front-page? It's exactly the same damn thing.

I'm sure that will happen.

The difference is that if you decide to work for a C# shop doing Blazor development, they are less likely to switch the company over to Ruby than a JS shop is to switch to a new framework or introduce new tooling.

If that sounds good to you, then it makes sense to simultaneously dislike the state of the JS ecosystem, yet like the introduction of wasm.


Not being JS, and therefore not having to deal with all the traps that it contains is a pretty big point of difference.

It also provides a boost to other languages, which I think is good, there's no really good reason why JS should continue to be the only browser 'blessed' language.


> therefore not having to deal with all the traps that it contains is a pretty big point of difference.

The primary complaint about front-end churn is that the landscape is confusing because there is too much tooling, there are too many options, and people should just use JS. If you're saying "not having to deal" with JS is a positive thing, then you're implicitly saying "churn" is not a problem, since "not having to deal with JS" is the only reason "churn" exists in the first place.

Either "churn" is bad and something like Rust-to-web is another example of unnecessary tools complicating the landscape, or tooling that helps avoid the warts of JS is a good thing and "churn" is a non-issue; you can't have it both ways.


The problem is not creating new web development tools but creating them for the sake of creating them. And what's worse is that people end up using them out of fear of falling behind. People complained about Maven that you first have to "download the internet" to run a build. This is even more true for npm. In the meantime, there are still native apps written in C with makefiles that work just fine. Also, wasm is just a new compilation target for Rust. This means if you know Rust you can write web apps; you have to learn less. With a new tool you have to learn its usage, syntax and idiosyncrasies.


> The problem is not creating new web development tools but creating them for the sake of creating them.

Why is that a problem and who is the arbiter of merit with regard to publishing code to the internet? I can't see any other way to parse what you've written other than "people should stop making so much stuff"

> wasm is just a new compilation target for Rust

And the vast majority of the JS ecosystem is just compilation targets for JS; if anything, Rust seems even further removed from the web ecosystem, since web applications typically don't require low-level performance.


Agreed. I'm also completely onboard with what Rust is doing here. Language choice is fantastic, I think opening up the web to multiple programming languages is going to be very good for the web as an ecosystem.

But at the same time --

To all the people complaining that modern web browsers are too complicated for small teams to build and maintain, do you think WASM helped with that at all?

To all the people complaining that Javascript's lack of an extensive standard library makes it hard to quickly read/grok other people's code on Github, do you think that situation is going to get any better when people are using entirely separate languages to program the same webapps?

To all the people complaining that there are too many frameworks and tools being released for the web to keep up with, do you think that's going to get any better when suddenly every programmer and their dog can start porting any Open Source UI toolkit/framework to the web with low-cost DOM bindings?


> To all the people complaining that modern web browsers are too complicated for small teams to build and maintain, do you think WASM helped with that at all?

Kind of, yes.

For a simple browser to serve as a general application platform, you need a simple base technology where much can be shipped at the library level. It would be fun to see a WASM-only browser where JS and CSS layout solutions run as WASM-compiled libraries.

So in theory WASM could be a first step towards a simpler browser, but in practice it's probably just a fantasy.


> To all the people complaining that modern web browsers are too complicated for small teams to build and maintain, do you think WASM helped with that at all

Absolutely not. I love Rust and would be happy to live in a world where I could write Rust in any place where I would typically use TypeScript or Babel or CoffeeScript back in the day, but none of that is going to be possible without an entire stack of tooling similar to what already exists for the JS-targeted ecosystem. I have no problem with that, but people who ostensibly dislike "churn" claim to have a problem with new tooling and new solutions for building web pages, and this is exactly the promise of wasm.

If anything, wasm represents the biggest shift in "churn" in the history of web development since it opens the door to dozens of new languages and frameworks that were previously impossible to use for web development.


> have always been opposed to the framing that people creating new web development tools is a bad thing.

You mean you support the creation of a new JS UI framework every other week? Or is this about something else?


Yes, I support developers doing whatever they want and releasing it to the commons for all to benefit from if they so choose. If that means "a new framework every week" then it is what it is, I don't see the problem with that. Just because someone wrote some code and put it on the internet doesn't mean you have to use it.


I am for the same thing, except when it specifically means a new JS UI framework every week. Nobody needs that.


> Nobody needs that.

You have no idea what people need when they decide to create whatever is they want to create, if you don't want to use it you don't have to.


That's the problem. Since I have no idea, I have to attend a meeting at the start of the week to discuss with the staff whether I need it or not, because everyone wants the next hot thing. It isn't a coding-culture or freedom problem as you put it, but rather a corporate problem, and more often than not it just hurts productivity.


I agree with with the intent of your statement!

However:

> Nobody wants to hand-write wasm (...)

As a side note I want to point out that it is actually quite feasible to hand-write WASM in the text representation WAT.

It has some high level control constructs, type checking, some unique safety guarantees and a simple memory model.

Writing some (simple) programs in WAT and possibly a small language compiler for WASM is quite educational, fun and can be inspiring.

It can also build a more grounded intuition for the performance characteristics of WASM.
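
For a flavor of what hand-written WAT looks like, here is a minimal module (a toy example, not tied to any particular toolchain):

```wat
;; A complete module exporting a single add function.
(module
  (func $add (param $a i32) (param $b i32) (result i32)
    local.get $a
    local.get $b
    i32.add)
  (export "add" (func $add)))
```

Assembled with wat2wasm (from the wabt toolkit), the resulting module can be loaded from JS with WebAssembly.instantiate and called like any other function.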


For sure! I find WASM to be quite readable, its actually an impressive feat of bytecode design.


None of that was necessary for WebAssembly to be developed.

There's no churn in WebAssembly tooling and frameworks yet, because WebAssembly has yet to make a big impact on many front-end devs. We will see the jQuery-but-WebAssembly for making bindings easier, the Bootstrap-but-WebAssembly for making UIs, and the React-but-WebAssembly for managing components (not actually those libraries, but equivalents) come along, and they will get adopted to make applications that serve large bundles of WebAssembly code for things that could be done better with simpler tools. That is inevitable.

WebAssembly will be used badly. Everything is, especially in web dev. Hopefully it will also be used well though.


The tools are not sophisticated, they are brute forced.

I'm not sure I've seen a more elegant approach to frontend web development in 2020 than Vanilla JS.

Maybe with some lit-html on top.


I feel like there's a movement to try to reel that in while keeping many of the advantages, via server-side DOM diffing over websockets (LiveView, Blazor, Stimulus Reflex, etc)


This was already possible with browser plugins.

ActiveState had ActiveX plugins for using Perl and Python instead of JavaScript.


I suspect it won't really hit until the changes that enable direct DOM access, and things that make dynamic languages easier to implement in WASM. These things: https://github.com/WebAssembly/proposals/issues/16


Is that still happening? It looks like it's been on the back burner for a few years now.


It took 15 years for LLVM and wasm to be made. Sandboxing and cross-compilation in a way that isn't crazy has always been a big problem for allowing the web to just run whatever it wants. Though I'm still skeptical of wasm's real security, given that its use in browsers is independently implemented, not separable from the JS stack, and we're still using JIT. Everyone else goes and implements their own runtime too.


Independent implementations are important. We don't want another Flash.


"Everything Old is New Again: Binary Security of WebAssembly"

http://www.software-lab.org/publications/usenixSec2020-WebAs...


We could as well have used VirtualBox instead to sandbox applications on the web, and run them at near-native speed. No need for LLVM and WASM. The only thing needed would be to incorporate VirtualBox into the browser.


Besides VirtualBox being a single implementation rather than a standard, that wouldn't have worked—VirtualBox is just a hypervisor, not an emulator, and so "has" whatever ISA the host has. So you'd need separate binaries for x86, ARM, etc.

Google did a much better version of this, without these problems; they called it PNaCl. People still didn't like it, because LLVM IR is still not technically a formal standard, only a de-facto one.


> because LLVM IR is still not technically a formal standard, only a de-facto one.

I'm not sure if this is still the case, but last time I looked into it LLVM IR also changed between releases in ways that were not backward compatible.


It is both not platform-independent, and not stable. That's not to say that this means it's bad! It was just not designed for that kind of purpose whatsoever.


> So you'd need separate binaries for x86, ARM, etc.

I don't see this as a fundamental limitation.

Perhaps I should have said QEMU instead of VirtualBox.


Google first developed NaCl, and that was rejected because of this problem. The main driver, I think, was that you can't predict the computing landscape. Just because we have certain common architectures around right now, doesn't mean there won't be new architectures around 20 years from now, and/or doesn't mean the architectures around now won't be dead by then.

The thing about the web, and web standards, is that web clients (browsers) are expected to be able to work with web apps, on a common "web platform." Users expect to be able to take an up-to-date web browser and point it at any arbitrary old website, and have it work. And web platform engineers agree that this is how things should be, and make sure that browsers do everything they need to do to make this possible.

But having the possibility of a web app delivering one of N different binaries for each of a bevy of random ISAs—and no requirement to support all ISAs, only whichever ones the site's author felt like deploying—means that the browser authors of 20-years-from-now, to support users' expectations of arbitrary old web apps "just working", would need to ship N virtual-machine interpreters, one for each ISA that people ever compiled web binaries for.

Basically, it'd be a not-quite-combinatorial explosion in VM/runtime implementation work, which would decrease the quality that any one VM/runtime could have (which is really bad when those very implementations are one of the main sources of security vulnerabilities for attacking computers today.)

And, even then, there'd still always be sites broken because they shipped native code only for a platform nobody ever bothered to build support into browsers for; or used an instruction only available on some extension of an ISA that only appears in some particular proprietary chipset.

The web-standards people all agreed that, if you were going to have "object code for the web", it was much better to constrain the web to one "abstract machine" ISA. All the browser authors could then put all their effort behind implementing just one high-quality runtime/VM serving that abstract machine.

The only dispute, after that, was what form the ISA would take. Google suggested LLVM IR, and was shot down. The WebAssembly group came up with its own proposal, and it got accepted, probably mostly because it was a proposal for a standalone formal standard, rather than a de-facto part of something else.

Probably, any ISA that was standalone in a similar way could have been used instead of WASM. But that is a surprisingly rare quality in an ISA. (For example, neither JVM nor CLR bytecode is a standard independent of the platform/runtime it's a part of. You can't become a member of the "JVM ISA steering committee", only a member of the Java working group.)


"took us 15 years of being flamed every time you pointed out you didn't like Javascript"

Rust is a special case here, light weight, and loved by JS devs.

I'd still "flame" anyone who wants to drop a C# or Java VM on their users.


> Rust is [...] loved by JS devs.

Is it? That's interesting, because to me the two languages seem to be polar opposites.


When you're looking for a new technology to invest in, something that's very different from your current focus can be quite valuable, as it gives you a lot to learn, a lot of new things you couldn't do before, etc.

"loved by JS devs" is maybe a bit strong, as there are a LOT of JS devs, but we do enjoy a lot of JavaScript folks getting involved with and/or using Rust, and I and others have given a bunch of talks about Rust (with and without wasm) at JavaScript conferences that were well received.


What killer features does Rust have that I'm missing out on writing web apps in plain old Javascript? The only impressions I got trying to learn Rust are that it's very inconvenient, esoteric-looking (the syntax departs from the norm seemingly at random, for no apparent reason other than to be different - why go to such pains shortening everything like pub and fn when fn x -> int takes up so much more space than int x?), and really just not something I'd want to go through just to write a webpage.

Static checking etc. is nice but what's wrong with things like typescript? Do we really want web libraries fragmented into a million languages?


Just because they're JS devs, doesn't mean that they're doing web development with Rust, to be clear. That's part of the whole "expanding what you can do" bit.

> randomly from the norm for no apparent reason other than to be different

This has been written about in a number of places, and I don't have time to get into it, but a lot of languages are trending in this direction with syntax because it is more regular in a language that contains pervasive type inference. It's not random.

> Static checking etc. is nice but what's wrong with things like typescript?

Typescript still has node (or deno, still V8) as a runtime, so you're still dealing with its runtime, and all the cons (and pros) there. Additionally, the guarantees you get out of TS and Rust are different, as they have pretty different type systems, even if they're both statically typed, and TS is closer to Rust than it is to JS.


A dislike of the function keyword used in a language seems like an incredibly strange and arbitrary way to make a decision about it.


The Rust ecosystem shares some characteristics and idioms that are familiar and appealing to JS developers, like:

- leaning on (some) pragmatic FP concepts

- modern tooling and dependency management

- very _approachable_ and solid documentation, books etc. (see MDN, Rust by Example, Rust Book, Eloquent Javascript...)

- both can target the browser (well)

- Mozilla has a very good name in the Web dev community. I think at least; this is definitely relevant to me.


You know what’s even more interesting? Using a low level language for no obvious performance reasons in the browser (as in, what performance critical thing are you solving?).

I just remember having to do a lot of my college classes in C++, where some of our basic programs needed variables to be constantly cast into another type just to do something with it. I remember, man, I cannot wait to have to deal with this when I parse json data from an api request, bring it on.

I am JS developer, and apparently I love Rust.


I start missing Rust approximately 30 minutes into every new Typescript project I work on because there's no good alternative to Rust enums (and other benefits that come with them, like pattern matching). I can't derive basic traits for my data, like specifying equality or a canonical ordering. I can't define math operations for my calendar types, my units of measure, or my 3D graphics structs. (De)serialization of data is a minefield.

In what world is Rust a lower level language compared to JavaScript at this point?
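Something like this (a made-up sketch with hypothetical types, just to show the shape of what I mean):

```rust
use std::ops::Add;

// Deriving equality and a canonical ordering is one attribute away.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Days(i64);

// Operator overloading for a unit-of-measure type.
impl Add for Days {
    type Output = Days;
    fn add(self, other: Days) -> Days {
        Days(self.0 + other.0)
    }
}

// An enum whose variants carry data, with exhaustive pattern matching.
#[derive(Debug, PartialEq)]
enum Fetch {
    Loading,
    Done(String),
    Failed { status: u16 },
}

fn describe(f: &Fetch) -> String {
    match f {
        Fetch::Loading => "still loading".to_string(),
        Fetch::Done(body) => format!("got {} bytes", body.len()),
        Fetch::Failed { status } => format!("failed with {}", status),
    }
}

fn main() {
    assert_eq!(Days(2) + Days(3), Days(5));
    assert!(Days(1) < Days(2)); // derived Ord
    assert_eq!(describe(&Fetch::Failed { status: 404 }), "failed with 404");
}
```

None of this has a first-class equivalent in Typescript; you end up hand-rolling tagged unions and ad-hoc `equals`/`compare` functions instead.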


To preface, I don’t know Rust. Just looking at that code, that stuff has references to memory. In what world is a reference pointer not lower level?

That seems like a much bigger minefield, but I’d be happy to be educated on this. I’d ask that part of that education include why I’d ever introduce this class of problems into web development.


I think that's the question most people starting to learn Rust ask. The answer is a bit hairy because of how much modern garbage-collected languages normalized data sharing. Whether you're writing Ruby or Java or C# or Python, you can always do `a = BigStruct(); a.name = 'tiny struct'`.

Rust is the first mainstream language that looks at this code and says "Hold on, are you the only one modifying `a.name`?" And the intimidating part is not the question. The question is not for you. The intimidating part is that the Rust compiler asks itself the question, and it always knows the correct answer.
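In code, the question looks something like this (a compiling sketch; the commented-out line is the one rustc would refuse):

```rust
#[derive(Debug)]
struct BigStruct {
    name: String,
}

fn main() {
    let mut a = BigStruct { name: String::from("big struct") };

    // Fine: exactly one mutable path to `a.name` at a time.
    a.name = String::from("tiny struct");

    let r = &a.name;          // shared (read-only) borrow
    // a.name.push_str("!");  // rustc: cannot mutate while `r` is still in use
    println!("{}", r);        // the borrow ends after its last use

    a.name.push_str("!");     // fine again: `r` is no longer used
    assert_eq!(a.name, "tiny struct!");
}
```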

What the Rust compiler is looking at is the same thing the senior programmers are looking at during code reviews at your company. The people who know who should own which data. In C, once you `malloc` a struct someone has to know to `free` it. That's the rule. You acquire a resource? You're in charge of releasing it. Or at least passing the responsibility of freeing it over to someone else.

Rustc is the ultimate code reviewer. Yes, it's super pedantic, but also, hey, it can take any amount of insults you can throw at it and still thrive. If you run out of things to call it and the code still doesn't compile, guess what, the problem is probably on your side.

I repeat, Rustc is the perfect code reviewer you could ask for. And it's available 24/7. Compilation speeds are not very fast? Ok. Compare them to code review times from your peers when using a lesser language. 10 seconds for an incremental compile doesn't seem so bad when the alternative is to get instant feedback and a comment 16 hours later that you missed an edge case.

Sorry for the big post. Thanks if you read the whole thing; kudos if you scrolled straight down. Be kind, work hard, and good things will happen.


> In what world is Rust a lower level language compared to JavaScript at this point?

The embedded one. Can't write bootloaders or kernels in JS. I mean, maybe you could but... why would you?

(On mobile here, bear with me if no newlines came through)


I see your point, but I think it only exposes the weakness of the terms "high-level" and "low-level" when used to describe programming languages. Rust is both low-level in that it can be used to write code for very small computers, but also high-level thanks to all the useful abstractions it provides.

There is no axis on which JavaScript (with or without Typescript) is a higher-level language than Rust in 2020. Rust offers a richer standard library, much MUCH more powerful facilities for modeling data, state-of-the-art (de)serialization that's one crate import and a one-liner annotation away.

I don't know who came up with asm.js, but that person definitely knew. JavaScript's destiny is to be a building block. Yes, it feels nice to write plain JS without importing any libraries. It's nice knowing the difference between `Array#slice` and `Array#splice` without looking it up on MDN. But it felt just as nice for ASM coders in the 80s to bypass a C compiler, and look where we are now.


You should try PureScript, ReasonML or Bucklescript.

You’ll get all that (and more), and it’ll be higher level.

Additionally, these languages are designed to target the web, so they have a lot of libraries and bindings for JS libraries already written.


I have. Purescript doesn't have typeclass derivation. BS technically does, but at the time I tried it the compiler plugin was so raw it was basically useless to me. BS/Reason don't support operator overloading. Etc etc.

Redex's library selection is anemic compared to Crates.io, and so is Pursuit's. There are a couple of gems in both, but for any given use case it's just as likely you will have to write your own code than find a workable solution in the package manager.

More importantly, Rust has orders of magnitude higher bus factor than Purescript and ReasonML combined. Phil Freeman has long since left his project, and so will eventually Hongbo Zhang, at which point Reason will follow PS's slow downward spiral into open source limbo.

Believe it or not, I actually put a bit of thought into this. And I do firmly believe that Rust is by far the best ML-like language for the web available to the public right now. Perhaps I should finally write that blog post I keep putting off.


>> no good alternative to Rust enums

So you are missing a basic feature that a bazillion other languages can give you. I am still amazed how people think that the ML features in Rust are somehow new inventions.


There are no good alternatives in Typescript. Nobody claimed that there are no other languages with enums.


Just because that was your C++ college experience, does not mean that reflects the experience of C++ today, let alone Rust today.


Look, that web app in that tutorial is roughly 200-300 lines of code in regular js/css/html, maybe less.

The onus is on you guys to tell me why I’d add cognitive overhead for simple webapps, and if it’s coming from the Rust community, I’m expecting to hear ‘performance’, in which case I’m game.

I have the same opinion on overhead in the JS churn cycle, it's up to you to make the case why the overhead makes sense for the cruddiest of apps.

We shouldn’t coronate things willy nilly. If Rust is the one true blood prince of the C era, I expect him to reign in similar domains, but please don’t flex that power in domains where you are a sub standard solution.

All hail Rust, but jesus, slow down. That’s a simple js app.


I don't know what to tell you. It's a blog post. Simple apps are how you start to learn things. Nobody is saying that doing this is the most amazing end all be-all JS is dead and over lololol.

Some people like Rust. Some people like web apps. Some people want to use Rust to write web apps. If you don't, there are tons of other technologies you can (and should!) use to do that, and someone saying "hey if you're interested, here's how with a small example" isn't a threat to any of that.


Fair enough. The thing about the developer community is that many of the best and worst trends came from innocuous blog posts. From innocuous conference talks. It’s a blessing and a curse.

I no longer treat these posts as banal.


Personal opinion: Rust seems as bizarre choice as C/C++ would be for web apps, regardless whether it's for the frontend or backend - I'd personally much prefer to use a higher-level language like C#, F#, Go, Scala or Kotlin.


F# turned out to be a great option for us, we combine it with Elm and they work very well together.


If you feel comfortable, consider making a PR to this repo of companies using F#: https://github.com/Kavignon/fsharp-companies . Always good to have another success story!


Actually that is how we were writing web apps in 2000, with Apache and IIS plugins written in C and C++ plus some scripting language like Tcl, but I wouldn't do it today, besides providing Web UI for IoT devices.


I was there :) I have memories of writing CGI apps, and it's not something I'd relish a return to.


Well, Rust is higher level than C, but not nearly as heavy as the others you mentioned.


The qualms you describe are with static typing, not necessarily low-level languages.

And considering TypeScript is so favored as a way to "herd the cats" of JavaScript typing, it seems not everybody shares your opinion.

People go to Rust not necessarily because it's "fast" or "low-level", but possibly because it has an expressive type system that lets you be precise and correct without having to be verbose.

Sure, the fact that it can be used at the low-level and for performance-intensive applications can be a very good thing, but it is far from Rust's only merit.


I promise you that the amount of JS devs who LIKE Rust is not very high. I cannot imagine going from a language like JS or TS to the confusing syntactical monster that is Rust.

I realize this is an unpopular opinion, but JS/TS is WAY more human readable than Rust is. It's almost the equivalent of programming in C for the browser. Plus, literally anything you could ever want to use in already in the JS ecosystem and doesn't have to be reinvented in Rust.


I do not disagree on the number of JS devs who like rust (how many even know that rust exists?), but what you think is a syntactical monster I find more readable than JS (except when generics are abused, which is rare in my experience). Just by having no parentheses around "if" it becomes a lot cleaner, while the anonymous / arrow function syntax in js makes things even more confusing. It's still very subjective, but if you're going just for clean syntax, I feel like python is the real winner.

Also, explicitly borrowing in method signatures is a BIG plus for me, as it's a lot easier to understand an API when it's clear whether or not it mutates an argument, as opposed to JS, where number arguments just cannot be mutated and pointer arguments (such as arrays) always can be. I find understanding random github projects a lot easier in rust than js or python.

I'm not saying you should use rust, but many like rust specifically because of its readability and even find they're more productive in the long term because of its "high level" features and explicitness (although LEARNING it is a lot harder).


> literally anything you could ever want to use

...so but how's the quality of npm ecoshitstem? last time I heard you guys had some trouble in paradise, plus everything noteworthy is owned by either facebook or google, both having huge antitrust issues lately


I still hope that we'll get a lightweight (or lightweight enough, at least) .NET VM on WebAssembly. Blazor is far from lightweight currently, but its runtime wasn't purposefully written for the task and is based on Mono.

Rust certainly has an advantage here, as it was designed to not require a runtime system.


That is more an issue with the current implementation than anything else, there are other managed languages with much better AOT compilation support for WebAssembly.


I'm mostly concerned by download size. That might remain relatively large even if AOT compiled, because GC and runtime still need to be shipped with the application.


That is why linkers are relevant.

TinyGo, D and AssemblyScript all manage small enough download sizes.

Then just like with native apps, one doesn't necessarily need to download everything at once.


>> Rust is a special case here, light weight, and loved by JS devs.

Is there any data on that?


> It only took us 15 years of being flamed every time you pointed out you didn't like Javascript

I don't think that has ever been an unreasonable reaction. Nor is it all that unique: if you want to do iOS development in something other than Objective-C or Swift then the iOS dev community is going to tell you that you're making a mistake. And they're often right.

I've used Rust. I like Rust a lot. But one thing I don't see mentioned in this article at all: "debugger". Or "inspector". The web development stack has some incredible tools to aid in development, debug code, memory usage, etc. etc. Can you still use them with this single page app? If not, what strength is Rust bringing here that you can't achieve with the standard web dev stack?


1. You can integrate existing high-quality libraries more natively

2. You can write your backend and front end in the same language which can be appealing to increase the ability for more people in the org to contribute code (eg you don’t need to hire a traditional web developer).

3. Tooling in rust is different. I’m sure the state of debugging and inspecting WASM will get better (I don’t actually know the state) but you’re now not limited to the tooling just in the browser and you can run all of that offline in a server

4. Performance

5. Tooling isn't there yet but it's an area of active development. Debugging in Chrome: https://developers.google.com/web/updates/2019/12/webassembl...


> you’re now not limited to the tooling just in the browser and you can run all of that offline in a server

When it comes to DOM interaction (a key part of this article) you absolutely can’t.

I see the logic in compiling a native library to Rust and using it via WASM. It’s the part where you bring the whole DOM WASM-side that I’m sceptical about. As you say, tooling isn’t there yet. Is performance even there yet? Last I heard WASM<->DOM interaction incurs a performance penalty. Right now it all feels like a “because I can” capability. I’m not saying that will always be the case, but for right here and now I’m sceptical.


You can write your stuff in such a way to minimize the WASM<->DOM interactions. And there is a speed benefit for pure WASM stuff, so I can see it being a net win if you're using a good framework that pushes you in the right direction.


Not sure about this particular instance, but I can tell you about a project I’m working on in C++ that compiles down to wasm.

I was working on some prototype code that would eventually run on an embedded device, so C++ was the obvious choice for that work. Getting it all running in a linux app using SDL was a good starting point, but then I needed to collaborate with some very non technical people. At that point using Emscripten to compile the code to run in the browser ended up being useful work. I can upload the resulting html, js, wasm file to a webserver and anyone with a link can take a look.

Most of the debugging I do outside of the browser. But I was able to do some debugging using the web inspector.

But you’re right. I’m not sure that the tooling is quite there yet for a more generic single page app. My code tends to destroy the browser if left open, and it’s hard to tell why (most likely something memory related that doesn’t translate well).

I’m tempted to do more experimenting with cross compiling to wasm, but I’m not sure if my goal was production code on a browser I’d target anything but HTML5 CSS and JS+framework.

That said, there's something to be said for being able to produce a webapp while having no direct need for HTML & CSS. In the article the author was still using HTML and CSS, so mostly the Rust was to avoid JS. My use case allowed me to not have to know HTML to produce a page. There's also lots of precedent with game devs using Unity or Haxe tool chains to cross compile to the browser.


I think the same could be said about node, though. A big ideal would be to be able to use the same libraries, code, tooling, etc. instead of needing to fragment with multiple languages/impls/skills/tools/etc.

I don't know why anybody would choose Javascript for a server except for the fact that it's also Javascript, so you can largely share your stack between front-end and back-end. But I'm not convinced Javascript is all that great of a choice compared to most other languages out there for backend development, albeit it being very popular.


oh yeah, why's that then?


It's single-threaded, it's much slower compared to languages like Go, Java, etc, and for any app that performance needs to be a consideration, Node would just never be my first choice.

If I'm building a blog, then maybe it makes sense, but for high profile web apps/infra I just don't understand the appeal beyond using similar tooling and having the learning curve be the same (e.g. at least on the surface your UI and backend engineers can theoretically be the same).

I guess it generally depends on what you are building, but I tend to prefer something a bit beefier. I also love Go, so I may have a bias there, though the performance bits are data-driven, not subjective.


When you figure out that you still have to use the DOM and CSS you're gonna turn right back around like grandpa Simpson.


That and ui/ux programming. Gotta make sure you are displaying the right stuff, and be meticulous about your UI events/interactions.

JavaScript might suck, but it’s got all the normal basic shit other languages have. Loops and shit, variables and stuff. You guys aren’t complaining about a little old language, you’re complaining about UI development and how tedious it can actually be if you want a professional level of polish.


don't have to, can use webgl.


Bye bye, accessibility!


Are we? There are and have been plenty of languages that target JavaScript for many years now. What does webasm actually enable that wasn't possible before?


Performance (asm.js was pretty close but required special support from the JS engine too).

But maybe more importantly: WASM frees Javascript from the future burden of being both a human-friendly programming language, and a machine-friendly compilation target.


Performance for a few low-level languages yes (Rust, C/C++), but with the GC support stalled and even less developed language interop story, there isn't anything in the horizon that promises to make it a good platform for today's HLL's. Meanwhile there are many nice HLL's targeting JSVM with good performance.

I think the human friendliness transition happened already when most JS programmers started compiling their JS-version-du-jour to legacy-JS.


Plugins already made that performance possible.


As per usual in the world of languages, nothing new is possible, but things can be easier and perform better.


This is what WebAssembly has to say about it: https://webassembly.org/docs/faq/


It's funny because non web folks complain about Javascript fatigue and no standard libraries/approaches - so now on top of that we are going to mix in other languages and everyone bikeshedding the same thing in N languages? Seems maintaining apps with all this code is going to get a lot harder.


There is/was Scala.js too, and other similar things that "transpile" to JS.


It's the year 2020 and I'm still out here writing JSP, PL/SQL, and VB6 lmfao


I've actually been wanting to do a video series on using VB6 to build a modern web application. It would have a Windows Forms GUI obviously.


A company I used to work for did an acquisition where the company being acquired had written their own web server in VB6. I never did find out why...


Wtf, "how" is more important than "why" actually LOL. The why probably involves consultants and a lot of powerpoint


I found this [0] suspicious-looking source code from 2002 that may be able to give some answers.

[0] https://www.developerfusion.com/code/2184/dm-web-server/


You should play with a language like Go or Python and get that rush of modernity


Idk why but I just really like javascript. I am not the type of person that likes to optimize things or make a deeply viable and robust and performant output. I just like to tinker and have as many things as possible abstracted away from me so I can make stupid apps based on maps and if statements so that they can do stupid things in a matter of 20 mins.


Please tell me this is a joke.


I know of at least two insurance companies in my country that still have their administrative software written in VB6 (probably along with some shoddy web app for IE8).

Also see "IT runs on Java 8" https://veekaybee.github.io/2019/05/10/java8/


There are two phases of a technology: the adoption for new projects phase, and the legacy phase. In the first phase, new projects dominate the programming activity in the technology. In the second phase, maintenance and improvement of already existing codebases dominates the activity. Technologies like jsp are in that second phase.


Wish it was! My life is full of Dim iX As Integer and CURSOR AS right now and it's pretty gross. But it's better than writing css or react at least.


um no it's not


Can't we agree that they're both just awful?


...and to reset the score, yet again, on usability/accessibility. sigh.


"where you can choose whatever language you want to program for the web" if by choosing you mean doing a PoC during the weekend, maybe ... for real work we don't have a choice.


I mean, don't do this for other reasons, like it's an accessibility nightmare.


You are getting downvoted because the rendered page is HTML, which has the same accessibility properties as always


There's more to rendering than looking at the DOM at static points in time. There's certainly more to accessibility than conveying changes to the dom which may or may not be "visible", or even relevant, to the end user.


Other than 10% of the globe unable to run wasm, what accessibility concerns come to mind here? Are there any that wouldn't also apply to a SPA that uses JS to manage the DOM?


Some people believe that wasm will encourage people to turn web pages into one big <canvas>, which would be quite a hit. I don't personally believe that, though.


I had in fact considered playing with and possibly deploying something like this if I liked it.

I'd like to thank this thread for reminding me why that's probably not a good idea.

Are there any good resources to read up on for making sure accessibility is good in <canvas>? I assume you'd have to implement it yourself.


> Are there any good resources to read up on for making sure accessibility is good in <canvas>

whatwg.org[1] is less than helpful when it comes to describing how to make sure canvas elements are accessible; MDN[2] is a little more helpful. tl;dr: it's a lot of hard work.

I have found some relatively useful posts in various places (eg: [3]) for people (like me) who don't like being told not to do something. These articles generally date from around the mid 2010s but, given that there has been little development in the <canvas> world since then, their advice is still good today.

[1] whatwg - https://html.spec.whatwg.org/multipage/canvas.html

[2] MDN - https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/...

[3] - https://developer.paciellogroup.com/blog/2015/02/html5-canva... - including this link because post was updated in 2020


Thanks, I've favorited this.


I have no idea personally, I imagine you'd have to implement it all yourself, yes, and I don't even know if the proper hooks are there, honestly.


The typical concerns that come with dynamic DOM manipulation....


Funny thing is, this is not strictly needed anymore, as TypeScript is probably the best language in existence right now.

edit: OK, maybe not compared to Rust... but here it wins by ease of learning.


I'll do just about anything to avoid JavaScript, including learning Rust.


While I do get annoyed at JS often, I get just as annoyed by Rust. Sometimes, I really just want to read a CSV and create a Map<String, String> from a few of the columns. But oh my god if Rust doesn't make that a nightmare. Is it a String? an &str? Do I want to allocate that string on the stack? I won't know the size of it at compile time, so of course not... right? So no to 'static? Give me a nice middle of the road like TypeScript, Kotlin, OCaml/Reason... Rust just goes SO FAR in the other direction that I don't find it a pleasant experience. I get that it's the ULTIMATE in performance, I just always question if I really need it for doing web work.


I think this Rust complaint is just inexperience.

You wanted a Map<String, String>, and that's exactly what you should have made. A String is a convenience wrapper for storing a &str on the heap, so worrying about where to allocate it is moot. The other options exist for performance reasons, and you haven't mentioned any.
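A minimal sketch of the CSV-ish case (std only, naively splitting on commas rather than using a real CSV parser):

```rust
use std::collections::HashMap;

// Columns 0 and 2 of each row become key and value.
// `csv` is borrowed (&str); the map owns its data (String),
// so it can outlive the input buffer.
fn two_columns(csv: &str) -> HashMap<String, String> {
    csv.lines()
        .filter_map(|line| {
            let cols: Vec<&str> = line.split(',').collect();
            match (cols.get(0), cols.get(2)) {
                (Some(k), Some(v)) => Some((k.to_string(), v.to_string())),
                _ => None,
            }
        })
        .collect()
}

fn main() {
    let data = "id,name,email\n1,ann,ann@example.com\n2,bob,bob@example.com";
    let map = two_columns(data);
    assert_eq!(map["1"], "ann@example.com");
    assert_eq!(map.len(), 3); // header row included in this naive sketch
}
```

The only &str/String decision here is `to_string()` at the point where the map takes ownership; everything else is just borrowed views into the input.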


Now make it a global thread safe hash map where the keys are references into a mutable memory map that can move its address (be remapped.) That's currently where rust is making my life miserable today. In C++ this is quite straightforward, albeit full of footguns.

Every time I need a static mutable thread safe map in rust for a cache or registry or some such thing, I groan.

I have a love hate relationship with rust. I love a lot of things about it, but sometimes I get so fed up with the borrow checker I drop into unsafe and just write the problem off as something better not done safely.


Maybe I am misunderstanding your needs, but could you not just use a RwLock[0]? For being global you will need to use the lazy_static[1] crate. Admittedly, I have not tried doing this directly though.

[0] https://doc.rust-lang.org/std/sync/struct.RwLock.html [1] https://crates.io/crates/lazy_static
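A minimal std-only sketch of that shape; I'm using std's OnceLock here in place of the lazy_static crate, which fills the same lazy-global role:

```rust
use std::collections::HashMap;
use std::sync::{OnceLock, RwLock};

// Global, lazily initialized, thread-safe map.
fn registry() -> &'static RwLock<HashMap<String, String>> {
    static REGISTRY: OnceLock<RwLock<HashMap<String, String>>> = OnceLock::new();
    REGISTRY.get_or_init(|| RwLock::new(HashMap::new()))
}

fn main() {
    registry()
        .write()
        .unwrap()
        .insert("key".to_string(), "value".to_string());

    // Many readers may hold the read lock concurrently.
    let guard = registry().read().unwrap();
    assert_eq!(guard.get("key").map(String::as_str), Some("value"));
}
```

This doesn't solve the harder part of the parent's problem (keys borrowing from a remappable memory map), but it covers the plain "global thread safe cache/registry" case without unsafe.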


That's part of the solution I'm working towards.

I'm using 'static references for the map keys, but they're not really 'static, so I use unsafe transmute to create them and I rebuild the map when the memory map changes to a different address.


Rust wants the things that are expensive to be obvious. Going from &str to String allocates, so that requires an upfront conversion before use.

On the other hand, String has a Deref impl to &str, so you can call any method defined on &str directly from a variable of type String. That doesn't require any cast, making it very easy.

To me this feels like a good trade off in ergonomics vs. performance.
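Concretely, the asymmetry looks like this (trivial made-up example):

```rust
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let owned: String = "hello".to_string(); // &str -> String: an explicit allocation
    let borrowed: &str = &owned;             // String -> &str: free, via Deref

    // &str methods work directly on a String thanks to Deref:
    assert!(owned.starts_with("he"));
    assert_eq!(shout(&owned), "HELLO");      // &String coerces to &str
    assert_eq!(borrowed.len(), 5);
}
```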


Rust was originally designed to be a systems programming language to replace C++. When you have experience in languages like C++, all of the questions you asked have extremely obvious answers, to the point of being an automatic decision.


Dlang lets you write GC code at first-- lets you allocate a string however you want-- and later convert that into @nogc code, or into GC code with manual deletions. The generational garbage collector will help with deleting strings. Best of both worlds-- fast prototyping, with the ability to drop down for faster code.

Furthermore, like interpreted languages, Dlang compiles very fast because the language implementers believe fast compilation is a core goal.


I always rather thought D had a lot of good ideas. But it hasn't caught on, and I don't see that changing.

If you can't get a job in it ( comparatively), can't find quality libraries for common tasks, and don't have a large user base running into painful edge cases before you do, it's going to be a rough ride.


To me this is a non issue here, I find Rusts strings far easier than many other languages. (and also safer)

Over time this knowledge becomes second nature and less of a pain point to grok.


Now that there are viable options for Rust via WASM, I'm actually excited to work on the front end.

TypeScript feels the same way that CoffeeScript did 6 years ago: it's a stopgap measure until JavaScript picks up the transpiled language's good features. Also, it's another layer on top of the already unruly JavaScript toolchain.

In comparison, Rust and its toolchain are pleasures to work with. I wouldn't choose to use TypeScript unless there is an explicit requirement to use JavaScript.


I'll fight the lifetime analysis with a smile if it means avoiding the null reference errors that happen all the way across the code and the bad type conversions causing all kind of user visible issues.


Try scala-js [1], there is a nice side-by-side [2] comparison with JavaScript, but with a powerful compiler behind.

[1]: https://www.scala-js.org

[2]: https://www.scala-js.org/doc/sjs-for-js/


I can only imagine how many JS diehard fans are screaming in rage when they read this. I’m with you.


All three of them :)

Mostly there are diehard haters for JS, not diehard fans.

A lot of people who use JS are pragmatists and fantastic problem solvers that I have a ton of respect for.

-Someone who's not too fond of JS


I used to be a die-hard hater but converted to a die-hard fan when I started actually learning the language. Once you know the quirks it's quite nice.

Of course since it's dynamically typed it's hard to use for larger programs, but TypeScript solves that.


I took the time to learn some modern JavaScript and I love it...

But give me a statically typed compiled language any day!

The browser is so important that we shouldn't be locked into using a single language, so I fully support being able to choose between Rust and JavaScript!


You read my mind.


Watch out, it will be transpiled to JavaScript, so there's a chance you might be learning it while debugging!


WebAssembly is not JavaScript. This code does not transpile to JavaScript. There might be some JavaScript glue code to call the WASM that's bundled into the website, but that's not what your comment is claiming.


Whoops, that was a bit misleading in hindsight. I was just trying to be facetious, but yes, you're right: "WebAssembly is not JavaScript"


Great post.

I'd love to see one talking about building a full stack app using Yew and Actix (or Rocket). And good ways of sharing types between the frontend and the backend.


You should have a look at this project built with actix, diesel and yew:

https://github.com/saschagrunert/webapp.rs


This is great. Thank you!


Thank you! Sharing common types and logic is pretty easy using cargo workspaces; will include this in future posts :)
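For readers unfamiliar with workspaces, a minimal sketch of how that sharing typically looks (the crate names here are hypothetical, not taken from the article):

```toml
# Cargo.toml at the workspace root: one `shared` crate holds the common
# types, and both the frontend and backend crates depend on it.
[workspace]
members = ["shared", "frontend", "backend"]

# Then in frontend/Cargo.toml and backend/Cargo.toml:
# [dependencies]
# shared = { path = "../shared" }
```

With the request/response types defined once in `shared` (typically deriving serde's Serialize/Deserialize), both sides of the wire compile against the same definitions.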


Awesome!

I think it would also be cool to explore alternatives to REST with JSON for a full stack Rust app.


I have a hobby project where I am serializing everything to bincode with serde, and communicating with my own protocol over websockets. It's quite nice! Rust's enums make this kind of serialization and communication super easy.
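The serde + bincode setup the commenter describes derives all of this automatically; as a rough std-only illustration (the `Message` type and wire layout are hypothetical) of why Rust enums map so naturally onto a tagged binary wire format:

```rust
// A hand-rolled stand-in for what #[derive(Serialize, Deserialize)] +
// bincode automate: each enum variant becomes a one-byte tag followed
// by its payload bytes.
#[derive(Debug, PartialEq)]
enum Message {
    Ping,
    Chat(String),
}

fn encode(msg: &Message) -> Vec<u8> {
    match msg {
        Message::Ping => vec![0],
        Message::Chat(text) => {
            let mut buf = vec![1];
            buf.extend_from_slice(text.as_bytes());
            buf
        }
    }
}

fn decode(bytes: &[u8]) -> Option<Message> {
    // Read the tag byte, then interpret the rest as the payload.
    let (&tag, rest) = bytes.split_first()?;
    match tag {
        0 => Some(Message::Ping),
        1 => Some(Message::Chat(String::from_utf8(rest.to_vec()).ok()?)),
        _ => None,
    }
}
```

In the real setup you would replace `encode`/`decode` with bincode's serialize/deserialize calls on a serde-derived enum and push the resulting bytes through the websocket; a production protocol would also need length framing for streamed messages.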


That sounds very interesting. Do you have a public repo I could check out?


For sharing types you just have a shared lib; you don't have to do anything special. And you can serialize them easily with serde.


Until we fix the wasm->dom bridge to be just as fast as the js->dom bridge, js will still have a leg up.

AFAIK the hard bits are in bindgen which makes js structs look like wasm ones and vice versa.

I’ve been doing frontend for over a decade and quite bullish on wasm once that boundary isn’t so expensive.

It should be possible to make super performant apps with wasm. Figma sticks out to me as someone really pushing the edge.


I'm still baffled by the need for a boilerplate Javascript file to call into rust. Why can't I just include the wasm directly from HTML?

Seems like someone's feelings will be hurt when they realize that many developers want to skip JavaScript and just work with wasm.


The committee is working on a spec for accessing the DOM directly from WASM, but for now we still need to use things like wasm-bindgen to generate the boilerplate for us.

It's still fast, tho. Have you tried https://sandspiel.club/ ?


The browser API requires it.


It's just two lines of JavaScript to call into wasm: one line to load the wasm, another to call a function in it.

This adds latency, and could easily be replaced by a single script tag.


This is a big reason that I think wasm is a step in the wrong direction. It's basically nudging people into making things even more opaque.

How, for example, would you go about blocking ads in this kind of app?


> This is a big reason that I think wasm is a step in the wrong direction. It's basically nudging people into making things even more opaque.

WebAssembly is a tool, not a technique. It's not meant to replace JavaScript or the DOM, it's meant to supplement it in projects where it makes sense. Is it any more opaque than any other transpiled and minified SPA that in no way resembles anything remotely human readable anymore? I don't consider this a valid criticism of WASM.

> How, for example, would you go about blocking ads in this kind of app?

Probably the same way it's done now, by blocking requests to blacklisted domains known for hosting advertisements.


It's true that WebAssembly makes web apps more opaque, since it's a compiled binary vs JavaScript code.

However, ad blocking should work the same as for JavaScript single page apps. The Rust code in this article hooks into JavaScript APIs to create a standard webpage, using standard HTML elements and updating the DOM - just like a JS app would.


Sites using WebAssembly still need to use the DOM just like JS sites, and be equally ad-blockable. Technically a site could render everything as an opaque image in a canvas element, which would be hard to do ad-blocking on, but there's nothing about WebAssembly inherently connected to canvas elements. Whether a site uses WebAssembly and whether it puts all its contents in a canvas element are orthogonal issues. JS sites are equally capable of putting all their content in canvas elements, and very few of them do.


If it’s actually harder to block ads in apps built like this, that’s a great reason for site owners to choose wasm.


You can still use normal adblockers with this. Most of them just have a blacklist of domains that they block the requests for. Wasm won't change anything about that.

The requests are still exactly the same as in JS, so blocking will work exactly the same.


And a great reason for users to resist it.


I've been going through Programming WebAssembly with Rust, from Pragmatic Bookshelf and have been getting interested in the possibilities. Yew is just one part of the book, but it's a good crash course and motivation for digging in.

https://pragprog.com/titles/khrust/


I was thinking of a project I wanted to start, but I don't know if it would interest anyone.

Basically, I'd like to try to build all sort of app with program components similar to the Elm architecture (init, update, step, subscribe).

Those programs would be a bit similar to objects in OO except with stricter rules.

I did a proof of concept and it works very well for simple tasks (like subscribing to HttpServer.sub and getting messages for requests to your update function). The good side is that you can have a very simple API (I'd really like it to be usable by anyone), it's very easy to go massively multi-threaded, refactoring is easy, you never introduce hard dependencies in your code, and having a very strict model helps organize code.

I wanted to support wasm with a subset of the API (that's why I think of it now).

But I stopped, because I realized it would certainly be of little interest except for show.


Can anyone please tell me how the author is able to use HTML syntax in Rust?

I get that there are macros, but how are html tags valid syntax? Is rust just interpreting the html content as strings?

I've only ever seen C macros, and I don't remember seeing this kind of wizardry happening there.


It's using this: https://yew.rs/docs/concepts/html and no, just like JSX, it's not HTML strings. It's building an HTML-like AST in a macro.

One great thing about JSX (and Yew) is that it's a much more secure way to build HTML, because you don't need to worry about escaping behaviors as much. (Sometimes you still do, like when injecting inline CSS, etc., but not that often.)
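To make the escaping point concrete, here is a rough std-only sketch of the kind of entity escaping that html!-style macros apply to interpolated text for you (a hand-rolled stand-in for illustration, not Yew's actual implementation):

```rust
// Any user-supplied text interpolated into markup must have its HTML
// metacharacters escaped, or it becomes an XSS vector. Template macros
// like Yew's html! (or JSX) do the equivalent of this automatically.
fn escape_html(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for c in input.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&#39;"),
            _ => out.push(c),
        }
    }
    out
}
```

Because the macro builds an AST rather than concatenating strings, interpolated values can only ever land in text or attribute positions, never as raw markup.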


whoa, so you can write like a DSL parser using the rust macro matcher syntax?


Essentially yes. Macros in Rust are very powerful, similar to Lisp macros (although they are naturally clunkier).

As you would expect, you will find macros for HTML, JSON etc. in libraries, but there are also quite a bunch of smaller, frequent macros that simply reduce common boilerplate.
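As a tiny toy illustration (std only, made-up syntax) of the matcher-based DSLs described above: a macro that accepts `key => value` pairs, which is not valid Rust on its own, and expands them into ordinary code.

```rust
// The matcher parses a small custom syntax and the expansion turns it
// into a Vec of (name, value) tuples using stringify! on the idents.
macro_rules! pairs {
    ( $( $key:ident => $val:expr ),* $(,)? ) => {
        vec![ $( (stringify!($key), $val) ),* ]
    };
}
```

For example, `pairs!(width => 10, height => 20)` expands to `vec![("width", 10), ("height", 20)]`. Libraries like Yew's html! take this much further with procedural macros, but the principle is the same.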


And then there are the Rust macros that let you write code in Lisp syntax, although I don't recall that any of them had a defmacro form…


Oh god, what have I seen


I'm sorry; I should have thought about the more sensitive readers.


Rust allows for procedural macros (basically a function that takes a token stream and returns a token stream), and yes, you can do parsing in this function:

https://doc.rust-lang.org/reference/procedural-macros.html


Macros in Rust don't need to be completely valid syntax. There are some requirements such as balanced parens and strings but the macro API is basically a token stream. The syntax requirements are just so that the compiler knows when the macro ends.

https://doc.rust-lang.org/1.7.0/book/macros.html#syntactic-r...


As a lot of less-than/greater-than tokens


Waiting for a Rails-like framework that would let me use only one language full-stack.


Isn't this example already using Rust full-stack, or am I missing something? You still need to write HTML, but that's also the case with Ruby on Rails.


Elixir does this via Phoenix + LiveView, tho it's not quite as fully-featured as Rails.


You can definitely already do that in Rust, although I'm not sure how mature all the tooling is yet and rather than using a single framework you'd be using a few different toolsets (e.g., Actix/Rocket, Diesel, Yew).


With Stimulus Reflex we are getting pretty close.

https://docs.stimulusreflex.com/


F# does this via Fable.


Hi, would you be able to give your experience in brief, how is fable? And elmish? How happy are you with fable?


This is a tangent, but does Fable still require you to manually toposort your module hierarchy in an XML file? That was my experience using the language with VS Code circa a year ago.


I've actually not been an actual user besides running a random repo to see how it all fits in nicely. But... i've been playing more with F# and it really is a fun language to use and capable.

I was able to compile a number of F# applications to "native" or as close as possible and they ended up getting significant performance improvements.

For instance a ray tracing application when compiled to native had improvements of 50%.


C# as well via Blazor.


Yew, Seed, Percy


Hate to be that guy, but with such a small SPA I was expecting a smaller file size: 136KB compressed with brotli, 487KB uncompressed.

That being said, the score in Page Speed Insights is pretty good (95).

It will be interesting to see how the size increases with bigger/more complex apps.


I wouldn't assume the growth in binary size is anything like linear with code size (i.e. there's probably a lot of fixed overhead). And they appear to be using the "--dev" compiler profile, which is unoptimized and meant for debugging.
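For what it's worth, the commonly cited size levers for a wasm release build look something like this (typical suggestions, not settings taken from the article):

```toml
# Cargo.toml: typical size-oriented settings for a wasm release build
[profile.release]
opt-level = "z"  # optimize for size rather than speed
lto = true       # link-time optimization enables cross-crate dead-code removal
```

Building with --release instead of --dev, and optionally post-processing the binary with wasm-opt, usually shrinks the output substantially.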


I remember back when people generated html with native code using CGI.pm and it was considered a bad idea. Don't mix languages they said. The current CGI.pm deprecated all of the HTML generating parts in fact.

The official justification:

  The rationale for this is that the HTML generation functions of CGI.pm are an obfuscation at best and a maintenance nightmare at worst. You should be using a template engine for better separation of concerns. See CGI::Alternatives for an example of using CGI.pm with the Template::Toolkit module.


Can you load WASM modules dynamically like we do in JS with import()?


In some sense, that's the only thing you can do in the browser, as you have to execute JS to instantiate the wasm.

The code in this post actually uses import to dynamically import said JS that imports the wasm.


Right, I guess the tooling would need to support code splitting into separated wasm modules like we do with Webpack or Rollup.


Am I the only one that finds this REALLY verbose for doing some really simple stuff?

Forewarning, I'm a JS dev trying to get out of the "JS" box. I'm super interested in WASM for a lot of reasons (running ML models on the client in python, etc).

This syntax (and maybe it's just Yew) feels like it has SO much boilerplate. For you seasoned Rustaceans out there, is this something that could be reduced? It feels overkill for me. But then again, maybe I'm a spoiled JS dev with the wrong expectations.

Really curious to answers on this.


It's hard to say without knowing what specifically you find to be verbose. "verbose" is a very subjective measure.


Frameworks like this and other similar non-standard approaches like Elixir Phoenix and LiveView are awesome; however, I'm still lost as to how one should go about styling, and/or structuring styles dynamically.

Does anyone have recommendations for guides or simple approaches that only focus on the pure UI side of the problem? I've never really "had" to learn JS, and honestly every time I have to dip my toes in, it leaves a very bad taste in my mouth.


What is the size of the served wasm binaries?


Looking at the linked demo site[0] the wasm file is 476kb (133kb compressed with brotli).

[0] https://rustmart-yew.netlify.app/


What are the debugging tools like for this?


You build your website in Rust; I build 3 websites in the same time with Python/JavaScript.


In which time I build 30 websites using WordPress. Heaven forbid the author does something for any reason other than productivity gains.


136kb feels heavy for what this is


A lot of that comes from a few large libraries needed to provide base runtime functions (like memory allocation).


Anyone know when all browsers will support 64bit wasm and allow >4gb memory?


Is blocking WASM a thing yet? I guess I wouldn't know if I've loaded a WASM page, but I was curious whether WASM was popular enough for IT security folks to block on enterprise computers.


Not to my knowledge. I don't see any reason for it to be blocked, even in theory. It's not like you get access to a more permissive API surface or anything. If someone is going to feed you malicious code to execute, all WASM will do is make it harder to inspect.


I've been hearing good things about Zig, and someone mentioned that Zig has better wasm support than Rust. Is that true? I wish Rust had a JS ecosystem too ...


Cannot wait for the days where i no longer have to write JS


That's interesting. But, IMO it has a lot of room for improvement to compete with frameworks like React, Vue, and Svelte.

Isn't it?


Thanks for this. I was wanting to look into this, and this page is way more than enough to get me started.


Sorry for being the pedantic one, but the article states early on "without writing a single line of JavaScript" and then, in the first code example, after the configuration files, one can read

    <script type="module">
      import init from "/wasm.js";
      init();
    </script>
So two lines of JavaScript ...


Yes, two of them instead of a single one. Promise kept!


And some HTML as well. Yew seems to use a JSX-like template language, but it's got to hit the DOM eventually. Then you're gonna need to handle Events and style with CSS. How do you handle errors? What does all this look like in DevTools? People act like JS is the worst part of web development which is simply naive.


About HTML they made no claims. For a SPA, HTML is also essential (they could use WebGL if they didn't want to use (much) HTML).

And errors and events are handled in the Rust code.


Ah, but that wasn't written, it was cut'n'pasted, which Doesn't Count. Or does it?


Answer A: For a joke it doesn't matter.

Answer B: By that logic it is "without writing any code"

But yes, the example is nice, and it's not production code, probably not even copyrightable.


> without writing a single line of JavaScript

Looks like 680 lines to me:

https://rustmart-yew.netlify.app/wasm.js


You do not write that JavaScript, it is generated for you by the tooling.


If the end solution relies on hundreds of lines of JavaScript... it's JavaScript. I will just write the whole thing in JavaScript instead of messing with something like this.


Sure, and you are very welcome to do so. That's not what the original post said, nor what you said in response, though.


That's auto generated and will shrink more and more in the future as WASM gets more abilities to directly call into DOM APIs and store JS objects.


There is no indication that will ever happen.


The wasm on this page is 476 KB.


I was hoping "single page" meant all the code fit on one page.


What's the point? JavaScript cannot have memory errors and has no threading, so why bother with a memory-safe language like Rust?


Well for one Rust has a very nice type system


Why create web apps in Rust? Won't that slow down development?


Depends who the developers are. My company has been acquired several times over the past decade and it has been for our backend. The frontend has been largely left to rot and is a seldom used nightmare at this point. Our backend devs that use Rust may be asked to build a simple SPA for a client relatively soon, so you can understand why they _may_ be happy to give this a try for the POC. My initial impression is that it would require a lot less KT/overhead to at least give them this option before shuffling another team in that really doesn't have the roadmap bandwidth to begin with.


Fair point. I just don't know that right now, at the beginning, is a good time. A solid ecosystem can make all the difference.


I eagerly await every single NPM package being rewritten in every single language just so someone can post a ShowHN about it.


It’s demo day everyday like kindergarten.


Zero overhead on the web, or just zero and overkill for the web?

Seriously, I don't get the appeal of doing frontend in Rust. Isomorphic, you say? Thought JSON solved that problem? Hell, I'd rather choose TypeScript even.


So as I understand it Rust is compelling because it is a safer alternative to C++ ( and sometimes C but mainly a C++ replacement ).

We wouldn't usually create a single page app in C++ right? So why would we want to do that in Rust ( other than, "just because" ). Right tool for the right job and all that.


Rust is much more like Ruby than C++ while retaining the characteristics of C++/C. The build system and ecosystem are also excellent, something you cannot say for C++.

In terms of WASM, Rust and C++/C are in the same boat, i.e. among the few languages you can realistically use because they don't require a heavy runtime or GC.


Rust is also a safer alternative to Javascript (and even Typescript for that matter).



No wasm stuff here though.


I have never understood why people find UI challenging, aside from accessibility and security.

State management is ridiculously easy. So easy that you will go to 10x greater effort reading your favorite framework's API.

To manage state, store your state criteria in a central object. Save that to storage somewhere, such as localStorage. To restore state, read the stored state object and apply each item to its respective area of the application.

Modules and components are ridiculously easy now too thanks to ECMAScript modules.



