The comments here are all discussing the ideas presented abstractly.
I'm more interested in the specifics:
> We use a lot of static methods and fields as to minimize allocations whenever we have to. By minimizing allocations and making the memory footprint as slim as possible, we decrease the application stalls due to garbage collection.
I would really love to see some profiling comparisons on this.
I simply cannot believe that just avoiding object initialization or state would add up to a meaningful difference. Any reasonable inversion of control framework applied dogmatically would pretty much ensure you almost never initialize objects at runtime anyway - only at service startup. BUT you maintain the clean composition and encapsulation of objects and gain the testability that they've given up.
I also can't believe that using fields directly over (I assume) Properties would make a meaningful difference either...
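To make the fields-vs-Properties point concrete, here is a minimal C# sketch (the types are made up): in an optimized build the JIT normally inlines trivial auto-property accessors, so both accesses tend to compile down to the same machine code.

    // Hypothetical types, for illustration only.
    public class WithField
    {
        public int Value;                 // direct field access
    }

    public class WithProperty
    {
        public int Value { get; set; }    // compiles to get_Value()/set_Value() methods
    }

    public static class Demo
    {
        // In a release build the JIT usually inlines the trivial accessors,
        // so this method reads both values with essentially the same code.
        public static int ReadBoth(WithField f, WithProperty p) => f.Value + p.Value;
    }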
Again, not doubting the premise that if performance is job one, you have to make readability and maintainability compromises. But I am unsure whether the examples presented here are actually relevant, or whether they were just easy to digest in a blog post.
My guess? They operated in crunch-y startup mode for many years, optimized feature delivery over all else (appropriate), and now have a messy, kludgy codebase that - at least - remains performant. Now they're scared to refactor and improve it lest the performance gods smile unkindly, and have come up with after-the-fact justifications for why they don't want to bother with writing unit tests (which, incidentally, work JUST FINE with static methods).
It makes a difference. Back in my C# days, a team rewrote an internal service in a C-like fashion: structs, no properties, almost no runtime heap allocations. The application absolutely screamed. If I recall, it was 2 or more orders of magnitude faster than the app it replaced. Because it was highly used, this made a big difference, and enabled new use cases that weren’t possible previously.
You see the same thing in Go. Libraries that are zero-allocating are significantly faster than the competition.
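For anyone who hasn't seen that style, here is a rough C# sketch of what "structs, no properties, almost no runtime heap allocations" can look like; the domain and the names are invented, not taken from either service mentioned above.

    using System;

    // A plain struct lives inline in its container (array, stack frame, etc.),
    // so it is not an individually GC-tracked object.
    public struct Quote
    {
        public long InstrumentId;
        public double Bid;
        public double Ask;
    }

    public static class Pricer
    {
        // Mutates a caller-owned buffer in place; nothing in this loop
        // allocates on the GC heap.
        public static void Widen(Span<Quote> quotes, double spread)
        {
            for (int i = 0; i < quotes.Length; i++)
            {
                quotes[i].Bid -= spread;
                quotes[i].Ask += spread;
            }
        }
    }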
Depends. Of course it is generally preferable to do no heap allocation, but the latter is very cheap as well with good GCs, and the reclamation process is amortized/can be done in parallel most of the time. E.g. Java's allocations are something like 3 CPU instructions, if I'm not mistaken?
Allocations are pretty cheap, but GC can be quite expensive under high load/big heap. It is probably not the first optimization you would need to make, but allocations have been a huge focus for the .NET Core team, and are part of the reason why it is so much faster than the full .NET Framework. See for example the post on improvements in .NET 5: https://devblogs.microsoft.com/dotnet/performance-improvemen...
But that depends on the number of objects. So unless you are creating objects like there is no tomorrow in a hot loop, the GC should be able to keep up with your workload quite well.
GC will cost 50-100 instructions per object reclaimed AND 5-10 per object not reclaimed.
Even if we ignore the latter, GC can add 2x for objects that you don't do much with.
And then there's the cache effects. Close to top-of-stack is pretty much guaranteed to be in L1 and might even be in registers. Heap allocated stuff is wherever. Yes, usually L2/L3 but that's at >2x the latency for the first access.
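Rather than take those numbers on faith, anyone can measure the gap for their own workload. A hypothetical micro-benchmark along these lines (C#, using BenchmarkDotNet's MemoryDiagnoser; the types are invented) reports both the allocation rate and the per-iteration cost of heap objects versus stack values:

    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;

    public class PointRef { public double X, Y; }     // heap-allocated, GC-tracked
    public struct PointVal { public double X, Y; }    // stack/register, no GC involvement

    [MemoryDiagnoser]   // reports bytes allocated and GC collection counts per benchmark
    public class AllocBench
    {
        [Benchmark]
        public double HeapLoop()
        {
            double sum = 0;
            for (int i = 0; i < 1_000_000; i++)
            {
                var p = new PointRef { X = i, Y = i };   // one small object per iteration
                sum += p.X + p.Y;
            }
            return sum;
        }

        [Benchmark]
        public double StackLoop()
        {
            double sum = 0;
            for (int i = 0; i < 1_000_000; i++)
            {
                var p = new PointVal { X = i, Y = i };   // never touches the GC heap
                sum += p.X + p.Y;
            }
            return sum;
        }
    }

    public static class Program
    {
        public static void Main() => BenchmarkRunner.Run<AllocBench>();
    }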
Are your instruction numbers from a generational GC? Not doubting them, but perhaps that can further amortize the cost (new objects that are likely to die are looked at more often).
The cache misses are indeed real, but I think we should not pose the problem as if the two alternatives are a GC with pointer chasing versus some ultra-efficient SoA or array-based language. Generally, most programs will require allocations, and those will cause indirections, or they will simply not loop over some region of memory at all. Then GC really is cheap and may be the better tradeoff (faster allocation, parallel reclaim). But yeah, runtimes absolutely need a way to express value classes.
Generational affects the number of free objects that can be found, not the cost of freeing or looking at an object, or even the number of objects looked at.
I thought that the comparison was between "heap-allocate every object" vs "zero allocation." (C/C++ and similar languages which make it easy to stack-allocate objects, which is not far from zero-allocation.)
If the application is such that zero-allocation isn't easy, then that comparison doesn't make sense.
However, we're discussing situations when zero (or stack) allocation is possible.
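As a small C# example of the stack-allocation case (a hypothetical helper, not code from the thread): when a buffer is small and its size is bounded, stackalloc keeps it off the GC heap entirely, and the memory disappears when the method returns.

    using System;

    public static class Checksum
    {
        public static int Compute(ReadOnlySpan<byte> payload)
        {
            // Small, bounded scratch space on the stack: no GC object is created.
            Span<byte> scratch = stackalloc byte[64];
            int n = Math.Min(payload.Length, scratch.Length);
            payload.Slice(0, n).CopyTo(scratch);

            int sum = 0;
            for (int i = 0; i < n; i++) sum += scratch[i];
            return sum;
        }
    }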
The code is very simple and mostly consists of taking JSON, parsing it, checking a few fields and modifying them, and calling another service with another format of JSON.
Most of what we do is these null checks disguised as optionals.
And then people complain performance got worse compared to mainframe assembler.
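For context, that kind of handler looks roughly like the following C# sketch (the field names and the Relay class are hypothetical, using System.Text.Json): parse, null-check a couple of fields, rewrite them, and hand the result to the downstream call.

    using System;
    using System.Text.Json.Nodes;

    public static class Relay
    {
        public static string Transform(string incomingJson)
        {
            JsonNode? root = JsonNode.Parse(incomingJson);
            string? customerId = root?["customerId"]?.GetValue<string>();

            // The "null checks disguised as optionals".
            if (root is null || customerId is null)
                throw new ArgumentException("missing customerId");

            root["customerId"] = customerId.ToUpperInvariant();   // modify a few fields
            root["source"] = "relay";
            return root.ToJsonString();                           // outgoing JSON format
        }
    }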
What I find hilarious is that languages like Java & C# are the essence of Object Oriented Programming, but if you want to keep things performant you should avoid creating objects.
Well, they might have tried an unreasonable inversion of control framework (like Spring) and observed that Spring uses reflection on top of reflection and does slow down your code significantly. The solution isn't to abandon IOC (or, god help us, just give up and declare all your variables static/global), it's to abandon IOC frameworks. They're useless and they don't provide any advantage. IOC is a coding style that you can follow without the help of some idiotic "framework", just like ORMs.
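For what it's worth, "IoC without a framework" really is just constructor injection wired by hand at a single composition root; a minimal C# sketch (all names invented):

    using System;

    public interface IClock { DateTime UtcNow { get; } }
    public sealed class SystemClock : IClock { public DateTime UtcNow => DateTime.UtcNow; }

    public sealed class InvoiceService
    {
        private readonly IClock _clock;
        public InvoiceService(IClock clock) => _clock = clock;   // constructor injection, no container
        public string Stamp(string id) => $"{id}@{_clock.UtcNow:O}";
    }

    public static class Program
    {
        public static void Main()
        {
            // Composition root: objects are created once at startup and reused,
            // with no reflection-based framework involved.
            var service = new InvoiceService(new SystemClock());
            Console.WriteLine(service.Stamp("INV-1"));
        }
    }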