Hacker News
The original .NET garbage collector was written in Common Lisp (twitter.com/rainerjoswig)
221 points by tosh on May 25, 2020 | hide | past | favorite | 66 comments



> was written in Common Lisp and automatically translated to C++

That's the reason the .NET GC source code is still in one single massive .cpp file. I mean, look at it [1]. GitHub even refuses to display it.

[1] https://github.com/dotnet/runtime/blob/master/src/coreclr/sr...


Well, it works much better than the JVM one, so they must be doing something right. I never saw GC lock up a process the way it does on the JVM until I worked on a big Java project.


I could say the same thing with the names swapped. I have never really seen a JVM process stall for more than a few hundred ms. The CLR process I work with stalls for multiple seconds every 30 minutes or so.

Edit: and by this I don't mean that one of us is wrong. I mean that it might depend more on the application and its memory allocation patterns than the runtime.


Btw, you can't compare the two. .NET works way closer to the metal than Java does, thanks to better interop and to Span<T>, which gives more accessible and finer-grained control over unmanaged memory. Structs also exist, which eliminates a lot of problems when you're memory constrained: for example, most Java date objects are classes and use way more memory than any equivalent object/library in .NET (which uses structs).


Which JVM one? CMS? Parallel? G1? ZGC? Shenandoah?


You're right, it's getting better. The main problems were older JREs; G1 in 11 is better but still locks up. We haven't used ZGC in prod yet.

The main thing is dotnet had this figured out 10 years ago while Java is still struggling.


There are two sides to that coin.

.NET, by putting all their efforts into just one GC, has absolutely reaped a lot of benefits in terms of working well without any fuss. .NET also has better ergonomics around memory management. For example, it has much clearer and stronger guarantees about what state the runtime environment will be in after an out-of-memory error. On the other hand, I'm sure that making these guarantees has at times constrained the things that .NET can do with its garbage collector.

Java, by making the GC a plugin, has certainly spread its efforts thinner. It's also turned the Java-facing side of the memory management subsystem into a somewhat scary and mysterious black box, since that very swappability means that the application itself can assume almost nothing about its behavior. On the other hand, if ops wants to put the effort into tuning it, Java lets them have a lot more ability to control an application's memory and performance characteristics in production. It also leaves a lot more room for boutique JVMs with their own special memory managers.

Which approach to prefer is, I think, as much a matter of personal or organizational values as it is about technical merit.


What about throughput?


The file is 38k lines, in case anyone wondered


The translator seems to be the most interesting part of it, but which one is it?


Haha the only comparable file I'm aware of is UIView.m, which I believe is 900kb


[flagged]


Is it a bad code smell (see the old refactoring book)? Yes. Can there be rationale why no one acts on it? Yes. Is that okay? Yes.

Code purity is not everything. It is a single metric. Maoni maintains this file for the .NET Framework, Silverlight, .NET Core and nowadays Mono. She and her group will have reasons.

And honestly, all accounts from her co-workers describe her as a wonderful person with incredible insights. They talk about her in deepest respect and friendliness. Calling her crazy is the opposite of reality.


It's Maoni and she is great. GP is unreasonably personalizing a formatting preference.


Thanks. I corrected her name. Funny what culturally influenced reading can produce :)


[flagged]


You can't attack someone like that here and if you do it again we will ban you.

Attacking unorthodox preferences in programming is bad enough. Chuck Moore and Arthur Whitney like to see their entire programs in one screen. Most people consider that crazy but they are two of the greatest programmers of all time. If someone else likes a single long file, that's their preference. Here's what we should do: stop calling people crazy and stop punishing differences. We need more outliers in programming. The reflex to knock people down when they deviate is a huge limiting factor in this business.


Please reread your comment and think about your attitude.


I only know her from the talks she has given and her blog posts, but she really seems like a wonderful person, both technically and personally, and a very good example to show to girls with technical aspirations.


I saw her interview and she was just so awesome and confident. She grew up in China, graduated with a degree in Chemistry, learned to speak fluent English, and then somehow ended up being the sole maintainer (using the term loosely because she was actively improving it the whole time) of a mission critical piece of code for an entire ecosystem at Microsoft. Color me impressed. Definitely a top notch engineer regardless of gender full stop.


Well, her GitHub profile pic is "I ❤ 35000 lines in gc.cpp". She's been working on it successfully for 15 years, so I guess she's untouchable even within the org (I can't imagine the amount of stress she went through over the years). Aside from the issues with a 35K-line source file, being a deep domain expert with a singular focus seems awesome.

https://github.com/Maoni0

https://www.youtube.com/watch?v=ujkSnko0JNQ


Isn't that rant a bit over the top just for a large source file?

If the source file size matters you're really not using your editor right. With a modern editor it shouldn't really matter where something is, you'll always just directly jump to the symbols.

And that GitHub source view isn't really that great anyway. For any serious reading, better to clone it. Apparently the billions of VC capital weren't able to buy a proper source-code xref setup where you can actually jump to symbols directly, which is like the most important feature for any code review.

I guess the maintainer was just not very interested in trivial submissions. Yes splitting up source files is trivial.


> Yes splitting up source files is trivial.

It's not trivial. Have you ever had to maintain a project across multiple versions? .NET is even crazier: it supports multiple versions across multiple projects, with a lot of shared code that lives in different repositories.

If you split the file you cannot apply patches to older versions or different projects anymore. Merging will be a nightmare and the change history will, while not technically lost, be very hard to follow.

Try it for yourself. Make a git repository, a large file, two or three LTS version branches, and then split the file up. Now try to apply a patch you made in master to the LTS branches.


IMHO splitting into files is overrated anyway. "File" is such an abstract concept when the implementation can vary to the extent of anyone's imagination.

For humans, tooling is king. If your IDE can open windows into different parts of a file, set bookmarks, has excellent navigation and so on, then why obsess about the representation in the file system?


The most compelling argument I can give for splitting is that it introduces a very obvious hierarchical structuring. Our brains are much better at working with relatively small numbers of higher-level abstractions than they are at thousands of fine-grained ones.

There is a balance of course, and over-splitting is a thing. But vehemently arguing that there is no inherent value is a bit of a stretch.


On the other hand, one of the main diseases I see in big code bases is over-hierarchicalization.

Going back even to the first discoveries of software modularity (Parnas et al) there was never any talk about trees tall enough to reach the moon. It's always been about two or three layers in the normal case. Anything else and we get lost in the vertical direction instead.

(Implicit: and the GC might already count as being in the bottom, third layer.)


Also, considering the source's long history it would be a huge break for stylistic reasons.


I just copy-pasted it into SublimeText to take a peek at it, and it doesn't even register as particularly big, everything being instantaneous.


Calling someone "crazy" because you disagree with their code style is inappropriate.


>She's actually rejected community pull requests to break it up

Did she? The original pull request to split up that source file was closed by someone else with summary justification: https://github.com/dotnet/coreclr/pull/403

I think you're oversimplifying the problem here, which may be in great part institutional.


I'm not sure this was terribly uncommon at the time. My work for a thesis was prototyped in Scheme and then ported to C once I was sure I had all the kinks worked out. And I know I wasn't the only one who did that.

I found it to be a much more efficient way to work iteratively, because software that's written in a lisp is fundamentally more amenable to change than software that's written in a language like C.

Today, not so much, because the gap has closed a lot. IDEs and improvements to the ergonomics of lower-level languages have made them a lot more convenient to work in end-to-end, and cheap memory and compiler improvements have made it a lot more feasible to just ship code that was written in a higher-level language.


I've heard a fair amount over the years about Lisp as a great code-generating language. I wonder what sort of thinking took place at this intersection between writing in a macro language, imagining what the generated language would do, and then the assembler...


Can you elaborate on the part "Lisp as a great code generating language"? Just curious. I tried to learn Scheme for SICP but then switched to the Python version.


Lisp's big strength is that since the program itself is represented in the same form as the data structures, you can very easily generate executable programs by just slapping lists together. I'd recommend going back to the Scheme version-- I initially didn't understand the cool factor of Lisp either, until I used it for a while (and found macros).
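A toy illustration of that code-as-data idea (mine, in Python rather than Lisp): the "program" is just nested lists, so another program can assemble it before evaluating it:

```python
def evaluate(expr):
    """Evaluate a tiny s-expression represented as nested Python lists."""
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [evaluate(a) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        out = 1
        for v in vals:
            out *= v
        return out
    raise ValueError(f"unknown operator: {op}")

# Because the program is just a data structure, a program can build it:
terms = [["*", n, n] for n in range(1, 4)]   # (* 1 1) (* 2 2) (* 3 3)
program = ["+", *terms]                      # (+ (* 1 1) (* 2 2) (* 3 3))
print(evaluate(program))                     # 14
```

In actual Lisp the last two lines would be a macro slapping lists together; the point is that "generate a program" and "build a list" are the same operation.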


I definitely recommend finishing the Scheme-based SICP. Plenty of stuff in the original SICP, like metalinguistic abstraction, is just awkward or impossible in Python.


If Python didn't have with blocks, I could replicate the behavior by manually writing out a try/except/finally, but I could not write something to do that for me automatically (ignoring eval/exec). Where functions allow abstraction over a sequence of instructions, code generation allows abstraction over syntax, allowing the invention of new language features.


To repeat: really study the SICP version, it's so good. https://sarabander.github.io/sicp/


Try Elixir, which supports macros. Rust has macros too. Lisp macros are different and may be more old-fashioned in some respects, but it won't be hard to get the idea of a code-generating language; just pick one and try it.


There is no Python SICP. It pains me to see you waste your time with someone's inferior regurgitation/butchering of material written and presented by absolute geniuses that should be considered among the highest forms of art produced in the field.


Here are some articles on memory performance issues by the current maintainer of the 35,000 line GC file:

https://devblogs.microsoft.com/dotnet/work-flow-of-diagnosin...


Always amazed by the skills of people who can write language translators without much difficulty.


It only looks hard until the very first compiler design class, then it is a kind of mechanical exercise.

What requires expert-level work are the error messages, the quality of the generated code, and runtime performance.


Yes, except that many have trouble with the compilers class. If you get through it and do well, you're probably fine.


Compiling from lisp is even easier because parsing is trivial.
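For a sense of how trivial: a complete s-expression reader fits in a few lines. A Python sketch (mine, not any particular compiler's code):

```python
import re

# Tokens are just parens, numbers, and symbols -- no grammar to speak of.
def tokenize(src):
    return re.findall(r"\(|\)|[^\s()]+", src)

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)                  # discard the closing ")"
        return lst
    try:
        return int(tok)
    except ValueError:
        return tok                     # a symbol

def read(src):
    return parse(tokenize(src))

print(read("(defun sq (x) (* x x))"))
# ['defun', 'sq', ['x'], ['*', 'x', 'x']]
```

The output is already the AST; there is no separate parse-tree-to-AST step.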


Until the year before the one in which I took compiler design, the choice of implementation language was free, except that Lisp- or Prolog-derived languages weren't allowed.

The responsible professor considered the final project would be too easy for anyone using them, and he was kind of right.


One would say Compilers is a multidisciplinary subject and I think it is fine that some parts (e.g. parsing) might be boring for some, and some other parts (e.g. optimizations, codegen) might be exciting :)


I would be very interested to understand how the translation to C was done. I would love to prototype like that. The CL to C translators that I know produce code that is much more complicated. This is extremely clean in comparison.


I guess with a one-off translator you can spend more time on the patterns you know you have to translate well instead of having to handle every single possibility that could potentially come up in code. At work we can do something similar with our C#-to-everything-else compiler which handles a subset of C# 5, since for many things it's easier to just change the single line in C# than spending a day implementing a complicated feature.

The other reason the code is somewhat clean (albeit long) is probably that there's been 20 years of work on it. And it probably wasn't that long in the beginning.
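The shape of such a one-off translator can be sketched in a few lines (a hypothetical toy, not the actual Lisp-to-C++ tool): handle exactly the patterns you know occur, and bail loudly on everything else:

```python
# Hypothetical one-off source-to-source translator: s-expressions
# (nested Python lists) in, C-ish text out. It covers only a known
# subset rather than the full source language.
BINOPS = {"+": "+", "*": "*", "-": "-"}

def to_c(expr):
    if isinstance(expr, (int, float)):
        return str(expr)
    if isinstance(expr, str):
        return expr                                  # a variable name
    op, *args = expr
    if op in BINOPS:
        return "(" + f" {BINOPS[op]} ".join(to_c(a) for a in args) + ")"
    if op == "setq":
        var, val = args
        return f"{var} = {to_c(val)};"
    raise ValueError(f"pattern not handled: {op}")   # a one-off can just bail

print(to_c(["setq", "y", ["+", "x", ["*", 2, "x"]]]))  # y = (x + (2 * x));
```

When an unhandled pattern shows up, you change the one offending line in the source instead of teaching the translator a new feature, exactly the trade-off described above.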


I already knew that, but defining such a subset so that it is at once expressive and flexible enough is a challenge in itself, even more so if it is used to build a GC. Now I am curious: can you provide some more details about the C# subset you are using?


Well, it's perhaps not only the language subset, but also part of the standard library you have to provide or re-implement. I don't know Lisp too well, but I'm sure there are parts of it that are not necessarily mandatory for creating a prototype of a garbage collector.

As for C#, the things we don't support are:

- yield statements: codegen for that is not fun to do on your own and hand-writing an IEnumerator is annoying, but not too bad. And back when we started we couldn't get the generated code from Roslyn, maybe there's a way by now to do that.

- async/await (could nowadays be converted into pretty much the same in JS at least, but Task/Promise/CompletableFuture work slightly differently anyway and there aren't that many places in the codebase that need this, so they can be patched)

- dynamic (no reason to use it, thus no reason to support it).

- Some generic constraints, e.g. new(): We currently emit JS, TypeScript, Java, and Python, along with a number of internally-relevant output formats like documentation – and .NET generics already don't map well to Java generics (or vice-versa), and are mostly useless for JS and Python, anyway.

- goto, unless for fall-through in switch

Most “normal” things like types/members/statements/expressions do work properly, and the subset isn't too bad to work in (it's also not a particularly small subset, of course). A lot of additional work goes into getting the necessary parts of the BCL to work on the other side of the conversion, but that is a separate concern next to language features.


Apparently the conversion was used for a prototype and then it was rewritten. [0]

[0] https://twitter.com/Suchtiman/status/1265152818131894272


Never let the truth get in the way of a tall Lisp story. Not to throw shade on the original feat, which is a great achievement, but let's face it: many Lisp proponents are somewhat prone to hyperbole.


Interesting how Lisp is used in production


Kind of production. It was initially written in Lisp, then translated to C++ to go to production and maintained / extended from there.

That's a great way to prototype -- use what you know and then rewrite (or in this case transpile?)


However, unlike most prototypes, this one wasn't thrown away, but rather transpiled into production code.


> unlike most prototypes, this one wasn't thrown away

I wish...


"It's working code so deploy it now. You can pretty it up later."

Yeah, I've heard that one too many times. It amazes me how much that experience changes how a developer prototypes.


I'm reminded that I once read of someone using the Napkin look-and-feel for Java Swing as a way of managing customer expectations.

http://napkinlaf.sourceforge.net/

Edit Here we are: https://headrush.typepad.com/creating_passionate_users/2006/...


You use the tools you know best.

The blog author worked for TI on their Lisp Machine and for Lucid.


Why? I.e. why not some sort of Scheme?

I absolutely don't mean to start a "holy war"; the question is intended purely for the sake of learning more about Common Lisp: what features does it have that make it the right tool for this job in particular?


20 years ago, Common Lisp was a mature, stable environment. Scheme was (and is) much less stable. Even if Scheme were just as mature and stable, I'd guess that Common Lisp would still be an easier choice if the output target was C++.

Scheme is rather functional by nature and has a core reliance on things like proper tail calls and continuations. These require elaborate, whole-program compilation so they can be optimized away. A side effect of that kind of in-depth manipulation will necessarily be output code that is far removed from the layout of the original source (making it harder to understand what is happening or hunt down bugs).

Common Lisp can do functional things, but the core of the language is much more imperative. You'll wind up writing looping constructs instead of tail calls (tail calls aren't actually part of the spec though some/most implementations support them). CLOS can probably map onto C++ classes without too much trouble as well. These lower-level constructs make it easier to reason about performance.
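A small Python illustration of the rewrite at issue: a translator has to turn the Scheme-style tail call into the explicit loop a CL (or C++) programmer would write directly, or the output blows the stack:

```python
# Scheme-style: iteration expressed as a tail call. Without tail-call
# elimination this overflows the stack for large n.
def sum_to_recursive(n, acc=0):
    if n == 0:
        return acc
    return sum_to_recursive(n - 1, acc + n)   # call in tail position

# CL/C++-style: the loop that maps one-to-one onto the output language,
# which is what a whole-program Scheme compiler must synthesize.
def sum_to_loop(n):
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

print(sum_to_loop(100_000))    # 5000050000
print(sum_to_recursive(500))   # fine; 100_000 would exceed the recursion limit
```

The loop version also keeps the generated C++ close to the source layout, which is the debuggability point made above.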

That alone isn't everything. Common Lisp allows you to provide type hints and type specifiers to the compiler. Aside from the speedups they allow in CL, they would certainly make things easier when translating to C++. If that isn't enough of a boost, basically every major CL implementation allows you to define an inline assembly block. These things certainly could be done in Scheme (for example, Typed Racket), but the reader and unhygienic macros in CL certainly make it easier.


They had people who knew Common Lisp and garbage collection.

Plus Scheme has no batteries included.


But the target language in this case (C++) is also light in terms of batteries. It is most likely that CL was at their disposal and Scheme wasn't.


I wonder when .NET will get a SOTA low-pause-time collector like OpenJDK has with ZGC and Shenandoah. Seems even more critical as C# is widely used for games.


Most of .NET is moving towards avoiding heap allocations altogether; the latest versions improved a lot in that sense. A great example is the new System.Text.Json API, which can serialize/deserialize JSON documents up to 1 MB with zero heap allocations, using ArrayPool.
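The pooling idea behind ArrayPool<T> can be sketched like this (in Python, purely as an illustration; the class and names are mine): rent a buffer, use it, return it, so steady-state processing allocates nothing new:

```python
# Hypothetical sketch of the ArrayPool<T> idea: buffers are recycled
# through a free list instead of being allocated per operation.
class BufferPool:
    def __init__(self, size):
        self._size = size
        self._free = []

    def rent(self):
        # Reuse a returned buffer if one exists, else allocate once.
        return self._free.pop() if self._free else bytearray(self._size)

    def give_back(self, buf):
        self._free.append(buf)

def process(document: bytes, pool: BufferPool) -> int:
    buf = pool.rent()               # no allocation in the steady state
    try:
        n = len(document)
        buf[:n] = document          # parse/copy into the pooled buffer
        return n
    finally:
        pool.give_back(buf)         # always return the buffer to the pool
```

After the first few calls the pool is warm and `process` touches only recycled memory, which is the property that keeps GC pressure near zero.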


Which is great for greenfield projects with engineers who keep up to date with these latest developments.

If one chooses to pour time into improving the GC instead of the language, the benefits are reaped both by existing projects and new projects by old-school developers.


Some of these changes are happening in the runtime itself, so many projects will get benefits just by updating to a newer version (something you would have to do anyway to get GC improvements).

For example, System.Text.Json is the default in ASP.NET Core 3.x; updating to 3.0 would use it instead of Json.NET unless you explicitly opt out.

In the same way many APIs are starting to accept Span<T> in addition to IEnumerable<T>, Lists or arrays, so it's easy to opt-in into that. Other APIs are using it internally to avoid copies, so you don't need to do anything to get benefits.


Hmm. What was collecting Common Lisp's garbage, then?



