Well, it works much better than the JVM one, so they must be doing something right. I never saw a process's GC lock up like the JVM's until I worked on a big Java project.
I could say the same thing with the names swapped. I have never really seen a JVM process stall for more than a few hundred ms. The CLR process I work with stalls for multiple seconds every 30 minutes or so.
Edit: and by this I don't mean that one of us is wrong. I mean that it might depend more on the application and its memory allocation patterns than the runtime.
BTW, you can't compare the two directly. .NET works much closer to the metal than Java does, thanks to better interop and to Span, which gives more accessible, finer-grained control over unmanaged memory.
Structs also exist, which eliminates a lot of problems when you're memory-constrained.
For example, most Java date objects are classes and use far more memory than the equivalent .NET types, which are structs.
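To make that concrete, here's a small Java sketch (the names and the workaround are mine, not from any library) contrasting an array of `LocalDate` objects with the same dates packed into a primitive `long[]`. The packed layout is roughly what a .NET struct like `DateTime` gives you by default: the values live inline in the array, with no per-element heap object.

```java
import java.time.LocalDate;

public class DateFootprint {
    public static void main(String[] args) {
        int n = 1_000_000;

        // Object-per-date: each LocalDate is a separate heap object
        // (roughly an object header plus year/month/day fields, ~24 bytes
        // on typical JVMs), plus a 4-8 byte reference in the array.
        LocalDate[] boxed = new LocalDate[n];
        for (int i = 0; i < n; i++) {
            boxed[i] = LocalDate.ofEpochDay(i);
        }

        // Value-style workaround: pack each date into a primitive long
        // (its epoch day), stored inline -- a flat 8 bytes per entry and
        // nothing extra for the GC to trace.
        long[] packed = new long[n];
        for (int i = 0; i < n; i++) {
            packed[i] = boxed[i].toEpochDay();
        }

        // Round-trip check: packing loses no information.
        LocalDate restored = LocalDate.ofEpochDay(packed[123]);
        System.out.println(restored.equals(boxed[123])); // prints: true
    }
}
```

The sizes in the comments are approximate and JVM-dependent; the point is the extra indirection and header overhead, not the exact byte counts.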
.NET, by putting all its efforts into just one GC, has absolutely reaped a lot of benefits in terms of working well without any fuss. .NET also has better ergonomics around memory management. For example, it has much clearer and stronger guarantees about what state the runtime environment will be in after an out-of-memory error. On the other hand, I'm sure that making these guarantees has at times constrained what .NET can do with its garbage collector.
Java, by making the GC a plugin, has certainly spread its efforts thinner. It's also turned the Java-facing side of the memory management subsystem into a somewhat scary and mysterious black box, since that very swappability means that the application itself can assume almost nothing about its behavior. On the other hand, if ops wants to put the effort into tuning it, Java lets them have a lot more ability to control an application's memory and performance characteristics in production. It also leaves a lot more room for boutique JVMs with their own special memory managers.
Which approach to prefer is, I think, as much a matter of personal or organizational values as it is about technical merit.
Is it a bad code smell (see the old refactoring book)? Yes. Can there be rationale why no one acts on it? Yes. Is that okay? Yes.
Code purity is not everything. It is a single metric. Maoni maintains this file for the .NET Framework, Silverlight, .NET Core and nowadays Mono. She and her group will have their reasons.
And honestly, all accounts from her co-workers describe her as a wonderful person with incredible insights. They talk about her in deepest respect and friendliness. Calling her crazy is the opposite of reality.
You can't attack someone like that here and if you do it again we will ban you.
Attacking unorthodox preferences in programming is bad enough. Chuck Moore and Arthur Whitney like to see their entire programs in one screen. Most people consider that crazy but they are two of the greatest programmers of all time. If someone else likes a single long file, that's their preference. Here's what we should do: stop calling people crazy and stop punishing differences. We need more outliers in programming. The reflex to knock people down when they deviate is a huge limiting factor in this business.
I only know her from the talks she has given and her blog posts, but she really seems like a wonderful person, both technically and personally, and a very good example to show girls with technical aspirations.
I saw her interview and she was just so awesome and confident. She grew up in China, graduated with a degree in Chemistry, learned to speak fluent English, and then somehow ended up being the sole maintainer (using the term loosely because she was actively improving it the whole time) of a mission critical piece of code for an entire ecosystem at Microsoft. Color me impressed. Definitely a top notch engineer regardless of gender full stop.
Well, her GitHub profile pic is "I ❤️ 35000 lines in gc.cpp". She's been working on it successfully for 15 years, so I guess she's untouchable even within the org (I can't imagine the amount of stress she went through over the years). Aside from the issues with a 35K-line source file, being a deep domain expert with a singular focus seems awesome.
Isn't that rant a bit over the top just for a large source file?
If source file size matters to you, you're really not using your editor right. With a modern editor it shouldn't matter where something is; you just jump straight to the symbol.
And that GitHub source view isn't really that great anyway.
For any serious reading, better to clone it. Apparently billions in VC capital couldn't buy a proper source-code cross-reference setup that lets you jump directly to symbols, which is the most important feature for any code review.
I guess the maintainer just wasn't very interested in trivial submissions. And yes, splitting up source files is trivial.
It's not trivial. Have you ever had to maintain a project across multiple versions? .NET is even crazier: it supports multiple versions across multiple projects, with a lot of shared code living in different repositories.
If you split the file, you can no longer apply patches to older versions or sibling projects. Merging becomes a nightmare, and the change history, while not technically lost, becomes very hard to follow.
Try it for yourself. Make a git repository, a large file, two or three LTS version branches, and then split the file up. Now try to apply a patch you made in master to the LTS branches.
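Here's a minimal sketch of that experiment (all file, branch, and commit names are made up for illustration, and a 100-line file stands in for the real 35K-line one):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Stand-in for the one big file.
seq 1 100 | sed 's/^/line /' > gc.cpp
git add gc.cpp
git commit -qm "initial: one big gc.cpp"
git branch lts                       # pretend this is an LTS release branch

# On the main branch, split the big file in two.
head -n 50 gc.cpp > gc_alloc.cpp
tail -n 50 gc.cpp > gc_collect.cpp
git rm -q gc.cpp
git add gc_alloc.cpp gc_collect.cpp
git commit -qm "split gc.cpp into two files"

# A bug fix written against the new, split layout.
awk 'NR == 10 { print "line 10 (fixed)"; next } { print }' gc_alloc.cpp > tmp
mv tmp gc_alloc.cpp
git commit -qam "fix a bug in the allocator half"
FIX=$(git rev-parse HEAD)

# Try to backport the fix to the LTS branch, where the split never
# happened: gc_alloc.cpp does not exist there, so the three-way merge
# hits a modify/delete conflict instead of applying cleanly.
git checkout -q lts
if git cherry-pick "$FIX" >/dev/null 2>&1; then
    echo "fix applied cleanly (unexpected)"
else
    echo "conflict: fix no longer applies to lts"
    git cherry-pick --abort >/dev/null 2>&1 || true
fi
```

The same conflict hits every backport that touches the moved code, which is why the split is cheap to make but expensive to live with.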
IMHO, splitting into files is overrated anyway. A "file" is such an abstract concept when the implementation can vary to the extent of anyone's imagination.
For humans, tooling is king. If your IDE can open windows into different parts of a file, set bookmarks, has excellent navigation and so on, then why obsess over the representation in the file system?
The most compelling argument I can give for splitting is that it introduces a very obvious hierarchical structuring. Our brains are much better at working with relatively small numbers of higher-level abstractions than they are at thousands of fine-grained ones.
There is a balance of course, and over-splitting is a thing. But vehemently arguing that there is no inherent value is a bit of a stretch.
On the other hand, one of the main diseases I see in big code bases is over-hierarchicalization.
Going back even to the first discoveries of software modularity (Parnas et al.), there was never any talk of trees tall enough to reach the moon. It's always been about two or three layers in the normal case. Anything more and we get lost in the vertical direction instead.
(Implicit: and the GC might already count as being in the bottom, third layer.)
That's the reason the .NET GC source code is still in one single massive .cpp file. I mean, look at it [1]. GitHub even refuses to display it.
[1] https://github.com/dotnet/runtime/blob/master/src/coreclr/sr...