
Probably for performance reasons: allocating a whole new state each frame is pretty expensive.



Incremental updates to large, immutable data structures do not require allocating a whole new state per update, despite maintaining immutability. Mutation is still frequently more efficient, though, at the cost of not being able to reason immutably about your program. But if you're doing something that requires multiple versions of your state (e.g. history, undo/backtracking), the immutable version is going to be more efficient and definitely easier to write efficiently in the first place.
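
As a minimal sketch of how that works, assuming a toy persistent binary tree (all names here are hypothetical, not from any particular library): an update copies only the nodes on the path to the change, and every untouched subtree is shared between the old and new versions.

    type Tree = { readonly value: number; readonly left?: Tree; readonly right?: Tree };

    // Path copying: rebuild only the nodes along `path` ('L'/'R' steps);
    // every subtree not on the path is shared with the previous version.
    function update(tree: Tree, path: string, value: number): Tree {
      if (path.length === 0) return { ...tree, value };
      return path[0] === 'L'
        ? { ...tree, left: update(tree.left!, path.slice(1), value) }
        : { ...tree, right: update(tree.right!, path.slice(1), value) };
    }

    const v1: Tree = { value: 0, left: { value: 1 }, right: { value: 2 } };
    const v2 = update(v1, 'L', 10);              // two small allocations, not a full copy
    console.log(v1.left!.value, v2.left!.value); // 1 10 -- both versions still usable
    console.log(v1.right === v2.right);          // true -- untouched subtree is shared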


I'll give you that it'll be easier to write, but once you start working with value types (which Rust has proper support for), you don't get the nice efficiencies that come when everything in the world is a reference.


Most efficiency gains of imperative programming languages like C or C++ come from compact, contiguous memory layout and heavy use of the stack, which is usually in the L1 cache.

In theory there would be no difference between mutating an existing memory area and allocating a new one to write to. The number of bytes written to RAM is the same.


> In theory there would be no difference between mutating an existing memory area and allocating a new one to write to. The number of bytes written to RAM is the same.

It's not the same: you need to allocate new memory, and if it's on the heap then (depending on your allocator/OS/etc.) you may be making a syscall, which means a context switch, cache eviction, and so on.


This is where understanding the difference between theory and practice is important: by its nature, a freshly allocated heap area is not going to be in the L1 cache.

That means a difference of ~300 cycles before you can even start working with the cache line you pull from main memory.


But the state is likely a tree structure and there will be lots of sharing if coded right.

I think frameworks like React should explicitly guarantee that state and props are compared by reference, so that rendering can be optimized around it.
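
For what it's worth, that's roughly the kind of check a shallow comparison boils down to; a sketch (the helper name is hypothetical):

    // Shallow reference equality: if every field is the same reference (or an
    // equal primitive), nothing can have changed -- provided updates are made
    // immutably -- so the subtree can safely skip re-rendering.
    function shallowEqual(a: Record<string, unknown>, b: Record<string, unknown>): boolean {
      const ka = Object.keys(a);
      const kb = Object.keys(b);
      return ka.length === kb.length && ka.every((k) => Object.is(a[k], b[k]));
    }

    const prev = { items: [1, 2, 3], title: 'list' };
    const next = { items: prev.items, title: 'list' }; // `items` shared by reference
    console.log(shallowEqual(prev, next)); // true -> safe to skip rendering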


It's not any more expensive than allocating a new object in Javascript; possibly less. If you want immutable models, allocation will most likely happen on mutation anyway.


> It's not any more expensive than allocating a new object in Javascript.

So you mean bloody expensive? Seriously, if everybody thinks like you, it's no surprise the current web crawls on beastly computers.


Idk about JS's memory model, but you can allocate the equivalent of a JS object in Java and Haskell very, very cheaply. I really don't think allocating a single JS object is expensive... updates to large immutable data structures should just require a few allocations (i.e. a handful of pointer bumps). Sure, it's technically more expensive than an in-place update to an equivalent large mutable data structure. But it's also not a fair comparison, given that one gives you way stronger guarantees about its behavior.


Except in those languages it can be just as brutally painful to allocate. Start modifying strings in the render loop on Android and see how quickly you get destroyed by constant GC pauses.

The only way to address it is with extensive pooling and heuristics.
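
A minimal sketch of the pooling approach (names hypothetical): recycle objects through a free list so the steady state of a render loop allocates nothing, at the cost of manual reset logic.

    // A tiny object pool: acquire() reuses a released object when available,
    // so a hot loop stops producing garbage after warm-up.
    class Pool<T> {
      private free: T[] = [];
      constructor(private create: () => T, private reset: (obj: T) => void) {}

      acquire(): T {
        return this.free.pop() ?? this.create();
      }

      release(obj: T): void {
        this.reset(obj);
        this.free.push(obj);
      }
    }

    const points = new Pool(() => ({ x: 0, y: 0 }), (p) => { p.x = 0; p.y = 0; });
    const p = points.acquire();          // allocates the first time
    points.release(p);
    console.log(points.acquire() === p); // true: recycled, no new allocation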

Or you can just mutate.

Really, it's no wonder that the web is so slow if the common conception is that allocating is no big deal. If you really want to do performance right, allocations should be at the top of your list, right next to how cache-friendly your data structures are.


Because in reality it is no big deal. Modern GCs are incredibly efficient.

When a GC allocates memory, all it does is check whether there is enough room in the "young generation": if yes, it increments a pointer and returns the previous position. If there is not enough room, the GC traverses the GC roots on the stack, and only the "live" memory has to be traversed and copied over to the old generation. In other words, in the majority of cases allocating memory costs about as much as mutating memory.
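
The fast path described above, sketched in TypeScript (the `Nursery` shape and the collection details are hypothetical, not any particular VM):

    // Bump-pointer allocation: the common case is one bounds check plus one
    // addition. Collection work only happens when the nursery fills up, and
    // its cost is proportional to the *live* data, not to the garbage.
    class Nursery {
      private top = 0;
      constructor(private readonly capacity: number) {}

      alloc(size: number): number {
        if (this.top + size > this.capacity) {
          this.minorGC(); // slow path, amortized over many fast allocations
        }
        const addr = this.top;
        this.top += size;
        return addr;
      }

      private minorGC(): void {
        // Trace from the roots, evacuate survivors to the old generation,
        // then reuse the whole nursery; dead objects cost nothing to discard.
        this.top = 0;
      }
    }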


There is a huge difference between "algorithmically efficient" (i.e. big O) and "real-life efficient" (i.e. actual cycle counts). In real life, constant factors are a huge deal. Real developers don't just work with CS; they work with the actual underlying hardware. Big O has no concept of caches, SMP, hyperthreading, pipeline flushes, branch prediction, or anything else that actually matters to creating performant libraries and applications in real life.


There is a huge difference, and you're also discounting the GC. Haskell's GC, for example, is tuned to the characteristics of the language, meaning it's pretty efficient at allocating and cleaning up lots of memory; it has to be, since everything is immutable.

GCs for mutable languages like JS or Java aren't necessarily built for this the way GHC's is. And even discounting all that, things like stack memory, cache locality, etc. all make a huge difference in real-world performance. GCs have come a long way, but there is still a performance gap.


React does not allocate a new object in Javascript. No sane framework does.

"Immutable" data structures are not just objects that never get mutated, they're different manners of organizing the bytes.


React itself does not allocate a new object, but it forces you to do it yourself: to update the application state, you're supposed to call `setState()` with a brand new state (which is a newly allocated object). In the React tutorial[1], you can see the use of the `Object.assign` pattern, which performs a new allocation.

However, most JS runtimes have a generational GC, so an allocation isn't remotely as costly as an allocation in C or Rust.

[1] https://reactjs.org/tutorial/tutorial.html#why-immutability-...
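
The pattern in question, roughly as the tutorial's immutability section shows it (the data is illustrative):

    // Copy-on-write via Object.assign: build a fresh object that overrides
    // the changed fields, leaving the previous state intact for comparison.
    const player = { score: 1, name: 'Jeff' };
    const newPlayer = Object.assign({}, player, { score: 2 });
    console.log(player.score, newPlayer.score); // 1 2 -- old version untouched
    // A component would then hand the fresh object to this.setState(newPlayer).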


> React itself does not allocate a new object, but it forces you to do it yourself: to update the application state, you're supposed to call `setState()` with a brand new state (which is a newly allocated object)

Afaik you don't have to. It just makes things easier, since state changes can be detected through shallow reference comparisons. There is even the shouldComponentUpdate() method, to let you flag state changes that did not produce a whole new state object.
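
For example (the component shape here is hypothetical), immutable updates make the check a one-line reference comparison:

    import * as React from 'react';

    type Item = { id: number; label: string };

    // With immutable updates, "did anything change?" is a reference check;
    // shouldComponentUpdate() can also return true for changes that did NOT
    // produce a new object, if you mutate and know what changed.
    class ItemList extends React.Component<{ items: Item[] }> {
      shouldComponentUpdate(next: { items: Item[] }): boolean {
        return next.items !== this.props.items;
      }
      render() {
        return <ul>{this.props.items.map((it) => <li key={it.id}>{it.label}</li>)}</ul>;
      }
    }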


Doesn't make sense. You don't provide an entirely new object when you call `setState()`, just the parts of it that have changed.

When you use `Object.assign()` you're doing that, but you don't have to, and probably shouldn't.
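
A small illustration (state fields hypothetical): `setState()` shallow-merges the patch you pass, so untouched fields carry over without being respecified.

    import * as React from 'react';

    class Counter extends React.Component<{}, { count: number; label: string }> {
      state = { count: 0, label: 'clicks' };

      increment = () => {
        // Only the changed slice is allocated; setState() shallow-merges it,
        // so `label` carries over untouched.
        this.setState({ count: this.state.count + 1 });
      };

      render() {
        return <button onClick={this.increment}>{this.state.label}: {this.state.count}</button>;
      }
    }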


I would argue that it is obviously worth it for the simplicity of being explicit about what the next state is.

This kind of performance optimization, especially outside of game engines, seems unnecessary.


This is Rust. Not exactly a language that makes compromises on performance for the sake of simplicity.


Each cycle you waste on mobile is milliwatts you're burning from the battery. Even if the user doesn't perceive it, their battery will appreciate it.


So you save a single allocation for every state change in your app (never mind the actual render cycle), and lose a simple reference check for deciding whether you can short-circuit part of your app?




