> it's not any more expensive than allocating a new object in Javascript.

So you mean bloody expensive? Seriously, if everybody thinks like you, it's no surprise the current web crawls on beastly computers.




I don't know the details of JS's memory model, but you can allocate the equivalent of a JS object in Java and Haskell very, very cheaply, so I really don't think allocating a single JS object is expensive. Updates to large immutable data structures should only require a few allocations (i.e., a handful of pointer bumps). Sure, that's technically more expensive than an in-place update to an equivalent large mutable data structure, but it's also not a fair comparison, given that one of them gives you much stronger guarantees about its behavior.
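
To make the structural-sharing point concrete, here's a rough Java sketch (class and method names are made up, not from any library): "updating" a persistent list only allocates the prefix up to the changed element and shares everything after it with the old version.

    // Persistent singly-linked list: set(i, v) allocates i+1 new nodes and
    // shares the entire tail after position i with the old list.
    final class PList<T> {
        final T head;
        final PList<T> tail;

        PList(T head, PList<T> tail) {
            this.head = head;
            this.tail = tail;
        }

        PList<T> set(int i, T value) {
            if (i == 0) {
                return new PList<>(value, tail);              // one new node, tail shared
            }
            return new PList<>(head, tail.set(i - 1, value)); // copy only the prefix
        }
    }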


Except in those languages it can be just as brutally painful to allocate. Start modifying strings in the render loop on Android and see how quickly you get destroyed by constant GC pauses.

The only way to address it is with extensive pooling and heuristics.

Or you can just mutate.
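
For illustration, a rough Java sketch of both approaches (the class and methods are hypothetical, not a real Android API): the first version allocates a fresh string every frame, the second reuses one buffer so steady-state frames allocate nothing.

    final class FrameLabel {
        // Allocation-heavy: a fresh String (plus a hidden StringBuilder) every
        // frame, which is exactly what keeps the young-gen collector busy.
        static String allocating(int fps) {
            return "FPS: " + fps;
        }

        // "Just mutate": one reused buffer, zero steady-state allocation per frame.
        private final StringBuilder buffer = new StringBuilder(16);

        CharSequence reusing(int fps) {
            buffer.setLength(0);                  // reset, keep the backing array
            return buffer.append("FPS: ").append(fps);
        }
    }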

Really, it's no wonder the web is so slow if the common conception is that allocating is no big deal. If you want to do performance right, allocations should be at the top of your list, right next to how cache-friendly your data structures are.


Because in reality it is no big deal. Modern GCs are incredibly efficient.

When a GC allocates memory, all it does is check whether there is enough room in the "young generation"; if there is, it bumps a pointer and returns the previous end position. If there isn't enough room, the GC traverses the GC roots on the stack, and only the live objects have to be traversed and copied out of the young generation (eventually into the old generation). In other words, in the majority of cases allocating memory costs about as much as mutating it.
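
A toy Java sketch of that fast path (purely conceptual, not how any real GC is implemented): allocation is a bounds check plus one addition, and all the real work lives on the slow path.

    final class BumpAllocator {
        private final byte[] youngGen;   // stand-in for the young-generation region
        private int top = 0;             // next free offset

        BumpAllocator(int size) {
            youngGen = new byte[size];
        }

        // Fast path: a bounds check and one addition. Returns -1 when a minor
        // collection is needed (live objects get copied out, then top resets to 0).
        int allocate(int bytes) {
            if (top + bytes > youngGen.length) {
                return -1;               // slow path: run a young-gen collection
            }
            int offset = top;
            top += bytes;
            return offset;
        }
    }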


There is a huge difference between “algorithmically efficient” (i.e., big O) and “real-life efficient” (i.e., actual cycle counts). In real life, constant factors are a huge deal. Real developers don’t just work with CS theory; they work with the actual underlying hardware. Big O has no concept of caches, SMP, hyperthreading, pipeline flushes, branch prediction, or anything else that actually matters when building performant libraries and applications.
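
A small assumed example of a constant factor big O can't see: both loops below are O(n^2) over the same Java array-of-arrays (assumed rectangular and non-empty), but the second strides across rows and thrashes the cache.

    final class TraversalOrder {
        // Contiguous accesses within each row: cache friendly.
        static long sumRowMajor(int[][] grid) {
            long sum = 0;
            for (int i = 0; i < grid.length; i++)
                for (int j = 0; j < grid[i].length; j++)
                    sum += grid[i][j];
            return sum;
        }

        // Same asymptotic cost, but every access jumps to a different row array:
        // far more cache misses on a large grid.
        static long sumColumnMajor(int[][] grid) {
            long sum = 0;
            for (int j = 0; j < grid[0].length; j++)
                for (int i = 0; i < grid.length; i++)
                    sum += grid[i][j];
            return sum;
        }
    }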


There is a huge difference, and you're also discounting the GC itself. Haskell's GC, for example, is tuned to the characteristics of the language, which means it's very efficient at allocating and cleaning up lots of short-lived memory; it has to be, since everything is immutable.

GCs for mutable languages like JS or Java aren't necessarily built for that workload the way GHC's is. And even setting all that aside, things like stack allocation, cache behavior, etc. all make a huge difference in real-world performance. GCs have come a long way, but there is still a performance gap.



