Switching from JS to Rust will not give you a noticeable improvement in runtime speed, especially since V8 is absurdly well-tuned. If your JS logic was the bottleneck, it's a good sign that the code was poorly architected, and Rust cannot help you with that.



Yew already seems to outperform most other frontend frameworks and view libraries[0]. More importantly, it does this while still paying the serialisation cost of moving between WASM and JS. That cost should go away as WASM-to-JS interop improves, which will make Yew even faster.

0: https://github.com/DenisKolodin/todomvc-perf-comparison


Rust compiled to WASM is still vastly faster than JS (often by a factor of 10 or more). But yeah, you probably should be fixing your JS at that point if it matters that much.


Unless your claim is that WASM isn't actually faster than JavaScript - which I haven't personally verified but seems like a pretty shaky argument - you're not really making any sense.

When you've got 10,000+ component instances on a page, tiny bits of render logic add up. You can identify whether actual JS logic is your bottleneck (and to an extent, which JS is your bottleneck) through profiling.

The most common fix is to avoid calling render functions at all where possible. These are cases where the output of the render function will be identical to the previous output - which means React won't make any DOM changes - but where the render function will itself get called anyway. You can prevent this through better dirty-checks on props and state (shouldComponentUpdate, in React's case). Though if you're not careful, even those comparisons can become a limiting factor (not often, but sometimes). Immutable data structures like those in Immutable.js and pub/sub component updates like what MobX does can help ensure comparisons don't get expensive.
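To make that concrete, here is a minimal sketch of that kind of dirty-check using React's shouldComponentUpdate. The component and prop names are made up, and the point is that with immutable data these comparisons stay cheap reference checks:

  import React from 'react';

  // Hypothetical props for one row in a large list.
  interface RowProps {
    id: string;
    label: string;
    selected: boolean;
  }

  class Row extends React.Component<RowProps> {
    // Skip re-rendering when every prop is strictly equal. With immutable
    // data (e.g. Immutable.js), reference equality is enough to know that
    // nothing nested inside a prop changed either.
    shouldComponentUpdate(nextProps: RowProps): boolean {
      return (
        nextProps.id !== this.props.id ||
        nextProps.label !== this.props.label ||
        nextProps.selected !== this.props.selected
      );
    }

    render() {
      return <li>{this.props.label}</li>;
    }
  }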

Another trick is to do expensive operations like mapping over a large array ahead of time, instead of doing it on every render. Maybe you even need to perform your array transformation in-place with a for loop, to avoid allocating-and-copying. This is especially true if you do multiple array operations in a row like filtering, slicing, reducing. Memoization of data that gets re-used across renders is a generally helpful pattern.
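For example, something like this sketch (the item shape and the 300-row cutoff are made up) avoids both the repeated work across renders and the intermediate allocations:

  // A tiny memoizer keyed on the input array's identity: as long as the
  // caller passes the same (unchanged) array across renders, the cached
  // result is reused instead of being recomputed every time.
  function memoizeByRef<T, R>(fn: (input: T) => R): (input: T) => R {
    let lastInput: T | undefined;
    let lastResult: R | undefined;
    return (input: T): R => {
      if (input !== lastInput) {
        lastInput = input;
        lastResult = fn(input);
      }
      return lastResult as R;
    };
  }

  // Hypothetical shape of the data being rendered.
  interface Item {
    visible: boolean;
    value: number;
  }

  // One for loop instead of chained filter().slice().map(): a single pass
  // with no intermediate arrays allocated and copied.
  const topVisibleValues = memoizeByRef((items: Item[]): number[] => {
    const out: number[] = [];
    for (let i = 0; i < items.length && out.length < 300; i++) {
      if (items[i].visible) out.push(items[i].value);
    }
    return out;
  });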

Another huge factor in this case is concurrency: JavaScript runs on the same browser thread as reflow and all the rest, meaning that all of this JS logic blocks even non-JS interactions like scrolling and manifests very directly as UI jank. React rendering cannot happen in a worker thread because React requires direct access to the DOM API (element instances, etc.), and worker threads cannot share memory directly with the main thread; it's message-passing only (the upcoming React concurrent rendering work will help alleviate this problem, but doesn't directly solve it). Rust, on the other hand, can share memory between threads, meaning that in theory (assuming Yew takes advantage of this) renders can happen in parallel. Even if they don't, WASM can run in a worker, off the main DOM thread, which does mean it will probably incur some constant message-passing overhead, but it need never block reflow. And that would go a very long way towards preventing user-facing jank.
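To illustrate the message-passing constraint (the file name and message shape are invented; this is the general worker pattern, not how React or Yew actually do it):

  // main.ts: the main thread can only hand data to a worker by copying it
  // through postMessage; plain JS objects are structured-cloned, not shared.
  const worker = new Worker('heavy-work.js'); // hypothetical worker script

  worker.onmessage = (event: MessageEvent) => {
    // Results come back asynchronously, again as a copied message, and any
    // DOM updates based on them still have to happen on the main thread.
    console.log('worker results', event.data);
  };

  function recompute(rows: number[]): void {
    worker.postMessage({ type: 'recompute', rows });
  }

  // heavy-work.js (the worker side), for reference:
  //   self.onmessage = (event) => {
  //     const result = event.data.rows.map((n) => n * 2); // stand-in for real work
  //     self.postMessage(result);
  //   };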

The average web app doesn't run into these problems, and usually they can be optimized around, but when you do run up against these limits any across-the-board speed improvement that raises the performance ceiling can reduce the amount of micro-optimization that's necessary, reducing the cost in developer time and likely improving readability.


That is exactly what I'm talking about. Everything you referenced is an architecture problem, not a JavaScript problem. You can write slow apps in JavaScript+React, you can write slow apps in Rust. Unless milliseconds matter (doubtful in browser-land) you won't get the kind of performance you want for free just by switching languages.


Milliseconds absolutely matter. Every React render cycle (the entire relevant subtree from top to bottom, because again, it's all synchronous) that takes more than 16 milliseconds presents to the user as a dropped frame. In fact, that 16ms also has to include any DOM updates and the resulting browser reflow, which you have much less control over and which are usually more expensive than your JS, so really the window is smaller than that.
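The arithmetic is just 1000ms / 60fps ≈ 16.7ms per frame. A rough way to spot offenders (the wrapper function is hypothetical; performance.now() is the standard browser API):

  // The whole frame budget at 60fps, shared with DOM updates and reflow.
  const FRAME_BUDGET_MS = 1000 / 60;

  // Wrap an update pass and flag it when it eats the entire budget.
  function timeRenderPass<T>(label: string, renderPass: () => T): T {
    const start = performance.now();
    const result = renderPass();
    const elapsed = performance.now() - start;
    if (elapsed > FRAME_BUDGET_MS) {
      console.warn(label + ' took ' + elapsed.toFixed(1) + 'ms of a ' +
        FRAME_BUDGET_MS.toFixed(1) + 'ms frame budget');
    }
    return result;
  }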

Also, many of the optimizations I listed above can make code less readable in small ways. Many of them are things you should not do eagerly (premature optimization is the root of all evil), and should only go back and do once you've identified a specific problem. If an across-the-board speed increase prevents them from ever becoming problems, that's a win.

If you're thinking I'm a JS-hater, you're wrong. I think JS is a good language and I love using it where it's appropriate. But there are some use cases that benefit from a faster technology, and it's absolutely bonkers to try and argue that that technology shouldn't exist because "you can still make something slow with it if you really try".


16ms assumes a 60fps target, but there are higher-refresh-rate displays hitting the market, so that target might shift to 8ms or lower in the future.


60 fps is rarely needed in web apps.


<60fps results in a noticeably worse user experience, especially when scrolling. "Need" is relative, but I was really just trying to make the point that milliseconds matter: even if we cut that in half to 30fps, that's still 32ms maximum per render cycle.


What scenario can you think of where you have state updates while the user is scrolling?


Here are a couple cases we ran into:

1) We had a querying interface that would allow the user to construct a query and then it would return potentially thousands of results, asynchronously, over a websocket. We only displayed the first 300 on screen at a time, but sometimes the updates would come in so rapidly during that first 300 that one render wouldn't be finished before more results were available and the next render triggered. Things would get really backed up and the UI would hang for multiple seconds at a time. So we decided to throttle the rendering - get the results as fast as possible, but only render once every 500ms or so (see the sketch after these two cases). But during this time the user still might want to scroll around and look at other parts of the page, so we didn't want it to pause for 100ms every 500ms. It was a constant battle to keep things responsive while all this was going on.

2) We had another screen that would load another massive list (thousands and thousands) of entities as soon as you visited. These also came in gradually over time to spare the user from waiting for the last result before they could see the first. Similar deal: we throttled, but we needed to keep things responsive even during that throttling because the user would be scrolling up and down the results as they were coming in.
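For reference, the throttling in both cases boiled down to something like this sketch (the endpoint and renderBatch are stand-ins; the 500ms figure is the one mentioned above):

  // Buffer incoming websocket results and flush them to the UI at most
  // once per interval, instead of triggering a render per message.
  const RENDER_INTERVAL_MS = 500;

  let pending: unknown[] = [];

  const socket = new WebSocket('wss://example.invalid/query'); // hypothetical endpoint
  socket.onmessage = (event: MessageEvent) => {
    // Cheap: just accumulate. No rendering happens here.
    pending.push(JSON.parse(event.data));
  };

  setInterval(() => {
    if (pending.length === 0) return;
    const batch = pending;
    pending = [];
    // Hand the whole batch to the view layer in one update (e.g. a single
    // setState/dispatch) so React renders once per interval.
    renderBatch(batch);
  }, RENDER_INTERVAL_MS);

  function renderBatch(batch: unknown[]): void {
    console.log('rendering ' + batch.length + ' new results');
  }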


Both of these sound like side-effects of trying to render massive amounts of UI artifacts that the user doesn't actually need or want to see. Whether you use JS+React or Rust+Yew you're going to get yourself into trouble with firehose-style data handling.


Our users specifically demanded to see this much data. They actually complained that we couldn't show more. Once again you're making bold claims about a subset of use cases that you clearly have no context for commenting on.

I never claimed that switching languages would have magically solved all our problems, but in our case it could've been a significant boon to performance which could've been one factor of many that contributed to a solution, and I just can't figure out why you have such a problem with that idea.


I'm an old 3d graphics programmer. One of the performance rules is to only render what will be displayed in that frame, because rendering is expensive. That is why there are various culling algorithms, some of which are quite complicated. The reward for implementing all these advanced culling algorithms is very fast frame renders that give you more time to draw special effects, draw more characters, etc., in a single frame.

I am a data engineer now and sometimes have to build web interfaces for scrolling large data tables. Just like in the 3d graphics world, the trick to responsiveness is culling. Only download and display the amount of data that the user will see at a given time, plus some overhead so the user never sees gaps when scrolling.
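For a fixed-row-height table the culling itself is only a few lines (a sketch; the row height and overscan numbers are made up):

  // Given a fixed row height, the visible slice of a huge list falls out of
  // the scroll position directly - the web equivalent of frustum culling.
  const ROW_HEIGHT_PX = 28;   // assumed fixed height per row
  const OVERSCAN_ROWS = 10;   // extra rows above/below so scrolling never shows gaps

  function visibleRange(
    scrollTop: number,
    viewportHeight: number,
    totalRows: number
  ): { start: number; end: number } {
    const first = Math.floor(scrollTop / ROW_HEIGHT_PX);
    const count = Math.ceil(viewportHeight / ROW_HEIGHT_PX);
    return {
      start: Math.max(0, first - OVERSCAN_ROWS),
      end: Math.min(totalRows, first + count + OVERSCAN_ROWS),
    };
  }

  // Only fetch and render rows[start..end], and pad the container to
  // totalRows * ROW_HEIGHT_PX so the scrollbar still reflects the full list.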

While Rust may have solved the problem in #1, it is unlikely to solve the problem in #2 because you are also fighting the network where Rust is useless. Need culling.


Unfortunately, because web content is automatically positioned/sized in a way that's (mostly) opaque to JavaScript code, this is a pretty hairy problem (compared to graphics, where you have all the numbers on hand). So most of the time you'd either have to a) try to replicate the math the layout engine ends up doing, which makes things more brittle, or b) directly measure the computed dimensions/positions/visibility/scroll position of different elements, which tends to be a pretty large performance liability and in many cases would eliminate any gains you might receive.

There are some edge cases where these techniques can work - like if you have a really simple layout, and you can give a fixed height to every item (table row or otherwise), and you assume the user is never going to resize their browser window - but it isn't nearly as obvious a win as it is for 3D rendering, and we just never decided it was worthwhile to go down that rabbit hole.


> Unfortunately, because web content is automatically positioned/sized in a way that's (mostly) opaque to JavaScript code, this is a pretty hairy problem (compared to graphics, where you have all the numbers on hand).

Exactly the same for 3d graphics. The auto-positioning is different but conceptually similar.

Yes, B is my recommended approach. Since we are talking about performance, we should really put bounds on what fast and slow mean. It doesn't really matter if something is fast or slow in the absolute; what matters is the performance difference between the different approaches.

For B, on average, the culling calculation will be in the microsecond range, DOM rendering will be in the millisecond range, and the network is in the >100ms-to-seconds range. It is unlikely that B will be equal to or slower than brute-force rendering everything. The X microseconds spent figuring out the culling window save Y milliseconds of rendering and Z milliseconds to seconds of querying the network.


It's not that I have a problem with the idea, it's that I've spent most of the last decade building data visualization apps on the web and have never truly hit a wall with respect to JavaScript's performance characteristics. I think of it like this: no user is visually processing data faster than React can render it. If you render 300 elements and then want to render more updates before those first 300 had finished, did you need to render those first 300 at all? A human can't visually process hundreds of elements that quickly. The problem is that you're attempting to render a bunch of data that's not actually what your user is trying to see, not that React can't render it fast enough.

Most "performance" problems on the web are solved by simple techniques like virtualization, pagination, pushing expensive calculations to the server, avoiding round-trip requests, or basic UI/UX improvements. The only time I would care about JavaScript's actual runtime performance if I were doing something crazy like 3d rendering in the browser and that's a completely different set of problems than React or Yew are set up to handle.


The constraints on data visualization are not the same as the constraints on an IDE, which is closer to what we were building. Also, our users could definitely "process" the data faster than it could be rendered, because what they were doing most of the time was skimming through it for points of interest. They were not sitting and deeply considering each data point one at a time; they could glance at a page and know in a moment whether what they were looking for was there, or whether they needed to scroll further, or whether they needed to tweak the query and re-run it or click a link to a different screen. It was muscle-memory for them, and the tiniest hitch was frustrating to the user experience.


60 fps should be the bare minimum given the kind of hardware we run these days.


No, it should be based on the needs of the use case, as always. Very few websites, even highly dynamic ones that require React, need to be rendered at 60 fps.



