It's not any more expensive than allocating a new object in JavaScript; possibly less. If you want immutable models, allocation will most likely happen on mutation anyway.
Idk about JS's memory model, but you can allocate the equivalent of a JS object in Java and Haskell very, very cheaply, so I really don't think allocating a single JS object is expensive. Updates to large immutable data structures should just require a few allocations (i.e. a handful of pointer bumps). Sure, it's technically more expensive than an in-place update to an equivalent large mutable data structure, but it's also not a fair comparison, given that one gives you way stronger guarantees about its behavior.
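To illustrate the "few allocations" point, here's roughly what an update to a nested immutable structure looks like in plain JavaScript (the state shape is made up for the example): only the objects along the changed path are reallocated, and everything else is shared by reference.

```javascript
// A nested "immutable" state; we never mutate it in place.
const state = {
  user: { name: "Ada", score: 10 },
  settings: { theme: "dark" },
};

// Updating user.score copies only the spine: the root object and
// the user object. Two small allocations, not a deep copy.
const next = {
  ...state,
  user: { ...state.user, score: state.user.score + 1 },
};

// Untouched branches are shared, not copied.
console.log(next.settings === state.settings); // true
console.log(next.user === state.user);         // false
console.log(state.user.score);                 // 10 (original untouched)
```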
Except in those languages it can be just as brutally painful to allocate. Start modifying strings in the render loop on Android and see how quickly you get destroyed by constant GC pauses.
The only way to address it is with extensive pooling and heuristics.
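A toy sketch of what that pooling looks like (the `PointPool` class and its shape are made up for illustration): objects are recycled from a free list instead of being allocated fresh in a hot loop, so the GC never sees garbage.

```javascript
// Toy object pool: reuse point objects instead of allocating per frame.
class PointPool {
  constructor() {
    this.free = []; // recycled objects waiting to be reused
  }
  acquire(x, y) {
    // Reuse a released object if one is available; allocate otherwise.
    const p = this.free.pop() || { x: 0, y: 0 };
    p.x = x;
    p.y = y;
    return p;
  }
  release(p) {
    this.free.push(p); // hand the object back for reuse
  }
}

const pool = new PointPool();
const a = pool.acquire(1, 2);
pool.release(a);
const b = pool.acquire(3, 4); // same object as `a`, recycled
console.log(a === b); // true: no new allocation happened
```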
Or you can just mutate.
Really, it's no wonder that the web is so slow if the common conception is that allocating is no big deal. If you want to do performance right, allocations should be at the top of your list, right next to how cache-friendly your data structures are.
Because in reality it is no big deal. Modern GCs are incredibly efficient.
When a GC allocates memory, all it does is check whether there is enough room in the "young generation". If there is, it bumps a pointer and returns the previous position in the young generation. If there isn't, the GC traverses the GC root nodes on the stack; only the "live" memory has to be traversed and copied over to the old generation. In other words, in the majority of cases allocating memory costs almost as much as mutating memory.
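That bump-pointer fast path can be sketched in a few lines (a toy model with made-up sizes; real collectors do this in native code and the slow path actually copies live objects):

```javascript
// Toy bump allocator: a fixed "young generation" and a single pointer.
const YOUNG_GEN_SIZE = 1024; // bytes; arbitrary for the sketch
let bumpPtr = 0;

function allocate(size) {
  if (bumpPtr + size > YOUNG_GEN_SIZE) {
    // Real GC: scan the roots, copy live objects to the old
    // generation, then reset bumpPtr. Here we just signal it.
    throw new Error("young generation full: minor GC needed");
  }
  const offset = bumpPtr; // the allocation's "address"
  bumpPtr += size;        // the entire cost of the fast path
  return offset;
}

const a = allocate(16);
const b = allocate(32);
console.log(a, b); // 0 16
```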
There is a huge difference between “algorithmically efficient” (i.e. big O) and “real-life efficient” (i.e. actual cycle counts). In real life, constant factors are a huge deal. Real developers don’t just work with CS theory, they work with the actual underlying hardware. Big O has no concept of cache, SMP, hyperthreading, pipeline flushes, branch prediction, or anything else that actually matters to creating performant libraries and applications in real life.
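To make the cache point concrete, here's the kind of layout difference big O is blind to (a sketch with made-up data): a flat typed array keeps values contiguous in memory, while an array of heap objects scatters them behind pointers. Both loops are O(n), but they have very different memory access patterns.

```javascript
const N = 1000;

// Pointer-heavy layout: each element is a separate heap object.
const objects = Array.from({ length: N }, (_, i) => ({ value: i }));

// Cache-friendly layout: all values contiguous in one buffer.
const flat = new Float64Array(N);
for (let i = 0; i < N; i++) flat[i] = i;

// Same asymptotic complexity, different hardware behavior:
// the first loop chases a pointer per element, the second
// streams sequentially through one buffer.
let sumObjects = 0;
for (const o of objects) sumObjects += o.value;

let sumFlat = 0;
for (let i = 0; i < N; i++) sumFlat += flat[i];

console.log(sumObjects === sumFlat); // true: both are 499500
```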
There is a huge difference, and you're also discounting the GC. Haskell's GC, for example, is tuned to the characteristics of the language, meaning it's pretty efficient at allocating and cleaning up lots of memory; it has to be, since everything is immutable.
GCs for mutable languages like JS or Java aren't necessarily built for this the way GHC's is. And even discounting all that, things like stack memory, cache, etc. all make a huge difference in real-world performance. GCs have come a long way, but there is still a gap in performance.
React itself does not allocate a new object, but it forces you to do it yourself: to update the application state, you're supposed to call `setState()` with a brand-new state (which is a newly allocated object). In the React tutorial[1], you can notice the use of the `Object.assign` pattern, which performs a new allocation.
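A minimal sketch of that pattern outside React (the state shape is made up): each update allocates a fresh state object rather than mutating the old one.

```javascript
// Immutable update via Object.assign, as in older React examples
// before object spread syntax was common.
const prevState = { count: 0, label: "clicks" };

// Object.assign({}, ...) allocates a brand-new object; prevState
// is left untouched. In React, you'd pass this to setState().
const nextState = Object.assign({}, prevState, {
  count: prevState.count + 1,
});

console.log(nextState.count); // 1
console.log(prevState.count); // 0 (unchanged)
```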
However, most JS runtimes have a generational GC, so an allocation isn't remotely as costly as an allocation in C or Rust.
> React itself does not allocate a new object, but it forces you to do it yourself: to update the application state, you're supposed to call `setState()` with a brand new state (which is a newly allocated object)
Afaik you don't have to. It just makes things easier, since state changes can be detected through shallow reference comparisons. There is even the shouldComponentUpdate() hook, which lets the component handle state changes that did not replace the whole state object.
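The shallow reference comparison mentioned above can be sketched in plain JavaScript (a simplified version for illustration; React's internal comparison handles a few more edge cases):

```javascript
// Simplified shallow equality: compares one level of keys by
// reference, which is why replacing the state object makes
// change detection cheap.
function shallowEqual(a, b) {
  if (a === b) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // Only one level deep: nested objects are compared by reference.
  return keysA.every((k) => a[k] === b[k]);
}

const user = { name: "Ada" };
console.log(shallowEqual({ user }, { user }));                  // true: same reference
console.log(shallowEqual({ user }, { user: { name: "Ada" } })); // false: new nested object
```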