It takes a PhD to develop that (royalsloth.eu)
623 points by kaeruct on Oct 4, 2021 | 411 comments



A few hours later another programmer came up with a prototype of a much faster terminal renderer, proving that for an experienced programmer a terminal renderer is a fun weekend project, far from being a multi-year research undertaking.

I have no idea if this is the case here, and I suspect it might not be, but pretty much every time I've seen a developer complain that something is slow and then 'prove' that it can be faster by making a proof of concept, the only reason theirs is faster is that it doesn't implement the important-but-slow bits and ignores most of the edge cases. You shouldn't automatically assume something is actually bad just because someone shows a better proof-of-concept 'alternative'. They may have just ignored half the stuff it needs to do.


This particular case was discussed at length on Reddit and on YC News. The general consensus was that the Microsoft developers simply didn't have performance in the vocabulary, and couldn't fathom it being a solvable problem despite having a trivial scenario on their hands with no complexity to it at all.

The "complaining developer" produced a proof of concept in just two weekends that notably had more features[1] and was more correct than the Windows Terminal!

RefTerm 2 vs Windows Terminal in action: https://www.youtube.com/watch?v=99dKzubvpKE

Refterm 2 source: https://github.com/cmuratori/refterm

One of the previous discussions: https://news.ycombinator.com/item?id=27775268

[1] Features relevant to the debate at any rate, which was that it is possible to write a high-performance terminal renderer that also correctly renders Unicode. He didn't implement a lot of non-rendering features, but those are beside the point.


And the experienced developer is Casey Muratori who is somewhat well known for being a very experienced developer. That makes it less likely that he doesn't know what he's talking about and is skipping over hard/slow features.


And he had a condescending tone from the beginning (as he always does). Maybe if he was more respectful / likable, the developers would have responded better.


Where? He started completely neutral here:

https://github.com/microsoft/terminal/issues/10362

We live in a time where every competent developer is slandered in public if he isn't fully submissive to the great corporate powers.


I think the comment below in the github thread sums up the attitude of the developer. It's definitely not a "neutral" attitude. It's somewhat chip-on-shoulder and somewhat aggressive.

  > Setting the technical merits of your suggestion aside though: peppering your comments with clauses like “it’s that simple” or “extremely simple” and, somewhat unexpectedly “am I missing something?” can be read as impugning the reader. Some folks may be a little put off by your style here. I certainly am, but I am still trying to process exactly why that is.

But by any reasonable reading, the guy wasn't "slandered".


Man, if we start taking issue with "Am I missing something?", how can we have productive, good-faith discussions? The only attitude I can associate with that is openness to learn, a genuine curiosity.

How is a yes/no question aggressive? At that point the maintainers had two possible responses:

1. Yes you are missing that ...

2. No that is the complete picture.

But they chose to side channel to a third possibility, "we are put-off by your questioning!". Excuse me what?


> How is a yes/no question aggressive?

Have you stopped beating your wife?

More relevantly, when the question is asked genuinely then - as you say - it's expressing an openness to learn.

Sometimes it is asked rhetorically, dripping with sarcasm and derision. In that case, it is clearly not furthering our interest in productive, good-faith discussions.

Far more often, it falls somewhere between those two and - especially in text - is often ambiguous as to which was intended. While we should exercise charity and hope our conversational partners do likewise, it makes sense to understand when some phrases might be misconstrued and perhaps to edit accordingly.


If you're going to read emotional content into that "Am I missing something?", I think sarcasm and derision are not the most plausible options. In this case, it seems like incredulity is the more likely and appropriate reaction: because it seemed like the person asking the question was putting a lot more thought and effort into the discussion than the Microsoft developers who were not willing to seriously reconsider their off-the-cuff assumptions.


Oh, I didn't mean that sarcasm and derision is how the Microsoft developers interpreted the phrase. I was speaking to the notion that the question was necessarily innocent and could only be interpreted thusly.

I would say that incredulity falls within the range between "completely inoffensive" and "outright hostile", and very much toward the former side of the scale. It can be hard to distinguish from feigned incredulity, which (while still far from "sarcastic and derisive") makes its way toward the other side somewhat.


"Feigned incredulity" can be every bit as caustic as outright hostility.

It's all a matter of perception and context, of course. And though you say there's only one way to interpret it, even you describe it as a continuum.

Sadly, this is all just a missed opportunity.

MS could have been less defensive and more open to possible solutions. The genius programmer could have slowed his roll a bit and could have been more collaborative.


I do get the sense that the "feel" in his writing eventually becomes more like "what are you guys smoking, this should be simple!"

It's not just "Am I missing something?"

It's:

"Am I missing something? Why is all this stuff with "runs of characters" happening at all? Why would you ever need to separate the background from the foreground for performance reasons? It really seems like most of the code in the parser/renderer part of the terminal is unnecessary and just slows things down. What this code needs to do is extremely simple and it seems like it has been massively overcomplicated."

Perhaps frustrated that they don't really seem to be on the same technical page?

I tend to think these things can go both ways. I feel pointing out someone's frustration in writing tends to make things worse. Personally I would just ignore it in this case.


That exact case seems like a very appropriate scenario for clarifying? Microsoft kept saying something was difficult, whilst Casey knew that it was not, so really he was being polite by first confirming that there wasn't something he'd overlooked?


There's a difference between "inherently difficult" and "difficult to update this software package". My reading of this thread is that the MS devs are saying this will take them a lot of effort to implement in this app, not that a new implementation couldn't be simpler than the existing one. Asking to rearchitect the application is an involved process which would take a lot of back-and-forth to explain the tradeoffs. The new architecture can be simple, but evaluating a new architecture and moving to it are not.

There's a point at which you've moved from "fix this bug" or "evaluate this new component" to "justify the existing design" and "plan a re-architecture".


Whether or not you see his behavior as polite, I guess, is a matter of how you read people and the context of the situation. That said, he did literally admit he was being "terse". I think it was counterproductive at best and rather mean at worst.

As for whether it really is "difficult", one has to ask for whom? For someone that is intimately familiar with C++, DirectX, internationalization, the nature of production-grade terminal shell applications and all their features and requirements?

And even if it is "easy", so what? It just means Microsoft missed something and perhaps were kind of embarrassed, that's totally human, it happens. It's not so nice when this stuff is very public with harsh judgement all around.

This all rubs me the wrong way. I have found the Microsoft folks to be very helpful and generous with their attention on Github Issues. They've helped me and many others out; it has been genuinely refreshing. What this guy did might discourage participation and make folks more defensive to avoid losing face in a big public way over a mistake or silly gotcha.


Some people prefer to communicate with fewer words? This is an issue that crops up often with different cultures working on a single issue.

As for difficult, the context is very much set from it being a Github Issue on their own repo, meaning there is a certain assumption of skill.

You're cutting Microsoft a lot of slack here, and it feels like you're forgetting that out of this whole transaction MS ends up with free labour and bug fixes? They chose for the setting to be very public, and they chose to let their employees directly reply to customers with quotes like[1]: ["I will take your terse response as accepting the summary.", "somewhat combatively", "peppering your comments with clauses like", "impugning the reader."]. All of which is corporate passive-aggression and (in my mind) vastly more antagonistic than Casey ever was?

1. https://github.com/microsoft/terminal/issues/10362


One non sequitur deserves another. Just call his mother a cunt and move on.


Casey is in fact perpetually annoyed with and disdainful of microsoft. Anyone who is familiar with him knows this.

He's been like this for years, and that's fine when you are hanging out with your buddies over a beer, but now Casey is a public figure.

Being a public figure means you are not 'every competent developer'. The reason this was made so public wasn't ms employees, it was Casey's followers.

The sequence of events he started here ended with his fans and friends on Discord, feeling justified (because Casey, not they, was right on a technical level), brigading volunteers and Microsoft employees alike until at least one of them quit open source.

A truly ugly conclusion that could have been avoided with a more circumspect approach.


The problem wasn't that the Microsoft devs were wrong technically. The problem was that the tone of the Microsoft developers got much worse than Casey's tone; they should have just closed the bug rather than ridiculing him at the end. If they had done that, the issue wouldn't have been a big deal.


I've found people sometimes take a neutral tone, especially from someone (me for example!) who is sometimes more than a bit openly opinionated, as being passive aggressive (or passive condescending if that is a thing). Perhaps that is what has happened in this case?


For those curious, what was the outcome of this closed issue? Did Casey make a working terminal on Windows, beyond just a text renderer? Did Microsoft incorporate his feedback?

My worry is that Casey did this technical performance for the benefit of his followers, and nothing of value was gained, except of course Casey's growing fame.


Well, given how absurdly big the difference is, and that the main thing he did was render on demand instead of at 7000 fps, I think he has good reason to be condescending, and they totally deserve it for wasting millions of people's time with this shit.


See also: the blinking cursor in Visual Studio Code.

Here's a thread on it with other examples: https://news.ycombinator.com/item?id=13940014

They fixed it, but it was a sign of the times. Everything we've used over the decades had to be re-implemented for the web and stuff like Electron, and the people doing the implementing use such powerful machines that they don't even notice a simple blinking cursor devouring CPU or think about its impact on normal computers.


This! Developers at MS (edit: and elsewhere) should be forced to use the products of their brainwork on low-end machines at least two days a week.

Or not - regardless of what the MS employee claimed, Linux terminals' performance is more than adequate.

Edit: I am speaking of Linux, not WSL, of course.


Yes, the open source volunteers and random employees deserve it. They are responsible for all of microsoft's many sins, and we should find them online and tell them they are trash tier developers until they learn their lesson, right?

Ok, sarcasm off. This attitude is utterly toxic. People who are ignorant of how fast their software could be do not deserve abuse from strangers.


> People who are ignorant of how fast their software could be do not deserve abuse from strangers.

That's not the only valid way to frame the situation. At some level, professional software developers have a responsibility to be somewhat aware of the harm their currently-shipping code is doing.


Taking responsibility (which the developers later did by the way, even in this thread) and enduring abuse (which is also well documented here and elsewhere) should not be put on the same level.

More broadly, I'd much rather endure a slightly slow terminal made by developers acting in the open and in (mostly) good faith than the intentionally malicious software produced by actual bad actors within Google, Facebook, Microsoft et al.


"Abusive" is probably the the best one-word description of the way Microsoft and its software interacts with users. But I thing we'd agree it's a bit of a stretch to apply that to the case of a slightly slow terminal. However, it is absolutely fair to call it abusive when Microsoft tries to deny their problems or lie to their users that those problems are not Microsoft's fault and are something the users must simply put up with.

It's also important to keep in mind the vast asymmetry here. When Microsoft deploys problematic software, even a relatively minor problem will be responsible for many man-hours of frustration and wasted time. Far more man-hours than are ruined when a few developers have bad things said about them online. One doesn't excuse the other, but you can't ignore one of the harms simply because it's more diffuse.


The person that quit the project (and possibly the internet at large) wasn't a microsoft employee.

In my mind, there are two asymmetries.

* Microsoft v. Users

and

* Casey's network v. a 3-4 man open source team within microsoft

I don't disagree that the former is abusive.

However, it's my contention that this incident is primarily about the latter.

Casey, rightly, already had some pent up rage about the former asymmetry as well.

But it was a human manager/dev? within that small team, not Microsoft writ large, that got defensive about the software he was responsible for.

I believe I'd feel embarrassed and defensive too if something I'd worked on turned out to be flawed in a painfully obvious way. I can understand avoiding the grim truth by denying that the problem has truly been solved by ~700 lines of C.

Something else that I'll note here is that the vast majority of "Your software is too slow, here's how to fix it, I could do it in a weekend, and by the way this whole problem space is actually super simple" tickets do not end up realizing software speedups. Without proper context, they just sound patronizing, making the argument easier to dismiss.


That said, there's only so much patience one can have...


condescending: having or showing a feeling of patronizing superiority.

In this case he also demonstrated his superiority with working code.

Better to learn from it than to pout about it.


However, his experience, in games and game development tools AFAIK, might not be fully applicable to the development of mainstream commercial software that has to try to be all things to all people, including considerations like internationalization, accessibility, and backward compatibility. The performance difference that he demonstrated between Windows Terminal and refterm is certainly dramatic, but I wouldn't be surprised if there's something he's overlooking.


When I saw this mentioned on HN, I immediately knew this kind of comment would be there, because something along the lines of (and I am paraphrasing) "it's probably fast because it's not enterprise enough" was repeated in every place refterm was shared, by different people, even multiple times. Even after being shown all the proof in the world that it's in fact the opposite, they almost refused to believe that software can be that much better than today's standard, even to the point of bringing up arguments like a 16 fps terminal being better than a 7500 fps one because so many fps would probably consume too many resources. Before, I found Casey's tone when criticizing bad software off-putting, but now I understand that after many years of such arguments it takes a toll on you.


Seconding. It takes doing some low-level gamedev[0] stuff, or using software written by people like Casey, to realize just how fast software can be. There's an art to it, and it hits diminishing returns with complex software, but the baseline of popular software is so low it doesn't take much more than a pinch of care to beat it by an order of magnitude.

(Cue in the "but why bother, my app is IO bound anyway" counterarguments. That happens, but people too often forget you can design software to avoid or minimize the time spent waiting on IO. And I don't mean just "use async" - I mean design its whole structure and UX alike to manage waits better.)

--

[0] - Or HFT, or browser engine development, or a few other performance-mindful areas of software.


I feel obliged to point out the destructive power of Knuth's statement, "Premature optimization is the root of all evil."

I have encountered far too many people who interpret that to mean, "thou shalt not even consider performance until a user, PM or executive complains about it."


The irony is that the very paragraph in which Knuth made that statement (and the paper, and Knuth's programming style in general) is very much pro-optimization. He used that statement in the sense of "Sure, I agree with those who say that blind optimization everywhere is bad, but where it matters…".

Here's the quote in context:

> There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.…

( https://pic.plover.com/knuth-GOTO.pdf , http://www.kohala.com/start/papers.others/knuth.dec74.html For fun, see also the thread around https://twitter.com/pervognsen/status/1252736617510350848)

And from the same paper, an explicit statement of his attitude:

> The improvement in speed from Example 2 to Example 2a [which introduces a couple of goto statements] is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by pennywise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, …


Knuth's statement was basically "use a profiler and optimize the hot path instead of trying to optimize with your intuition", which is great advice. Most people heard "don't optimize at all". Something that you can derive from that advice is "have a hot path to optimize". I've seen a few programs that aren't trivial to optimize because the work is distributed everywhere.


I'm what most people call an FPGA Engineer, though I work all the way from boards/silicon to cloud applications. The number of times I've been asked to consult on performance for something in the software world, where the answer to how to do it right was me telling them "rm -rf $PROBLEMATIC_CODE" and then go rewrite it with a good algorithm, is way too damn high. Also, the number of times someone asked me to accelerate something on an FPGA only for me to go implement it to run on a GPU in about 2-3 days using SYCL + OpenCL is insane. Sure, I could get another 2x improvement... or we can accept the 1,000x improvement I just gave you at a much lower price.


Which, of course, never really happens, because PMs and execs always want more features, and performance is never a feature for them until it becomes so noticeably bad that they begrudgingly admit they should do the minimum to make users stop complaining.


Agreed. As a young performance-oriented coder I've often been looked down on by people who used Knuth's almost god-like authority to dress up all sorts of awful engineering.

And of course most people don't know the full quote and they don't care about what Knuth really meant at the time.


>> I've been often looked down by people who used Knuth almost god-like authority to dress up all sorts of awful engineering.

Quick quips don't get to trump awful engineering. Just call Knuth a boomer and point to the awful aspects of the actual code. No disrespect to Knuth, just dismiss him as easily as people use him to dismiss real problems.


I feel like that quote spoke to a particular time. Nowadays I'd point at premature abstraction as the fount of evil.


> I feel obliged to point out the destructive power of Knuth's statement, "Premature optimization is the root of all evil."

Except that line was written in a book (Volume 1: Art of Computer Programming) that was entirely written from the ground up in Assembly language.

It's been a while since I read the quote in context. But IIRC it was the question about saving one instruction between a for-loop that counts up vs a for-loop that counts down.

"Premature optimization is the root of all evil" if you're deciding to save 1-instruction to count from "last-number to 0", taking advantage of the jnz instruction common in assembly languages, rather than "0 to last-number". (that is: for(int i=0; i<size; i++) vs for(int i=size-1; i>=0; i--). The latter is slightly more efficient).

Especially because "last-number to 0" is a bit more difficult to think about and prove correct in a number of cases.
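
For concreteness, a minimal sketch of the two loop shapes being discussed (function and variable names are mine):

    /* Count-up form: needs a compare against `size` every iteration. */
    void scale_up(float *a, int size, float k)
    {
        for (int i = 0; i < size; i++)
            a[i] *= k;
    }

    /* Count-down form: on many ISAs the decrement sets the zero flag,
       so the loop can close with a single jnz-style branch. */
    void scale_down(float *a, int size, float k)
    {
        for (int i = size - 1; i >= 0; i--)
            a[i] *= k;
    }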


> Except that line was written in a book

I recall it being from his response to the debates over GOTO, and some googling seems to agree.

Not that that takes away from your overall point.


In the office today with my copy of Literate Programming (which contains the essay in question) I can confirm that the sentence does appear in "Structured Programming with goto Statements" (it appears on page 28 of my copy). Here it is in a general context, not pertaining to a single particular example.

In support of your overall point, though, having just said "[w]e should forget about small efficiencies, about 97% of the time", the next paragraph opens: "Yet we should not pass up our opportunities in that critical 3%."


It's hilarious to me that people quote the person that wrote TAOCP to justify not thinking about performance at all.


I'm not an experienced programmer but if I took all these maxims seriously...

If I don't think about performance and other critical things before committing to a design, I know that in the end I will have to rewrite everything or deliver awful software. Being lazy and a perfectionist, those are two things I really want to avoid.


... the best usage of this phrase I've encountered is using it to shut down a requirements discussion


We should forget about small efficiencies, say about 97% of the time.

Yet we should not pass up our opportunities in that critical 3%.


I find it striking that a decent modern laptop would have been a supercomputer 20 years ago, when people used Office 97 that was feature complete already IMO. I can't help this constant cognitive dissonance with modern software; do we really need supercomputers to move Windows out of the box?


We need some extra processing power to support larger screens and refresh rates. Arguably, security benefits of managed code / sandboxing are worth it - but the runtimes seem to be pretty-well optimized. Other than that, I don't see anything reasonable to justify the low performance of most software.


   "support larger screens and refresh rates"
Uh, yeah, 4K etc., but most modern machines are still 1920x1080@60Hz, which is only 8% larger than 1600x1200 - not an uncommon resolution in the late 1990s, usually running at 75Hz or better over analog VGA cables. So it's actually _LESS_ bandwidth, which is why many of us cried about the decade+ of regression in resolution/refresh brought on by the LCD manufacturers deciding computer monitors weren't worthy of being anything but overpriced TV screens. It's still ongoing, but at least there are some alternatives now.
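
Rough arithmetic behind that claim, as a quick sanity check (using the figures quoted above):

    #include <stdio.h>

    int main(void)
    {
        double modern = 1920.0 * 1080;   /* 1080p panel */
        double crt    = 1600.0 * 1200;   /* late-90s CRT resolution */
        printf("pixels/frame: %.0f vs %.0f (+%.0f%%)\n",
               modern, crt, (modern / crt - 1) * 100);
        printf("pixels/second: %.1fM @60Hz vs %.1fM @75Hz\n",
               modern * 60 / 1e6, crt * 75 / 1e6);
        return 0;
    }
    /* prints: pixels/frame: 2073600 vs 1920000 (+8%)
               pixels/second: 124.4M @60Hz vs 144.0M @75Hz */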

It is possible to get Office 97 (or for that matter 2003, which is one of the last non-sucky versions) and run it on a modern machine. It does basically everything instantly, including starting. So I don't really think resolution is the problem.

PS, I've had multiple monitors since the late 80's too, in various forms, in the late 1990's driving multiple large CRTs at high resolution from secondary PCI graphics cards, until they started coming with multiple ports (thanks matrox!) for reasonable prices.


I'd imagine software is bloated and grown until it is still just about usable on modern hardware. Making it faster there is probably seen as premature optimisation.

I'd imagine perhaps this is how product teams are assessed - is the component just-about fast enough, and does it have lots of features. So long as MS Office is the most feature-rich software package, enterprise will buy nothing else, slow or not.


It doesn't even need to be the most feature-rich any more. Microsoft has figured out that the key is corporate licensing.

Is Teams better than Zoom? No, but my last employer ditched Zoom because Teams was already included in their enterprise license package and they didn't want to pay twice for the same functionality.


It really is feature-complete. And I still use it for writing! Word 97 beats anything else I've tried in both polish and performance.


I think there's a story in here that most are missing, but your comment is closest to touching on. This was not a performance problem. This was a fundamental problem that surfaced as a performance issue.

The tech stack in use in the Windows Terminal project is new code bolted onto old code, and no one on the existing team knows how that old code works. No one understands what it's doing. No one knows whether the things that old code needed to do are still needed.

It took someone like Casey who knew gamedev to know instinctually that all of that stuff was junk and you could rewrite it in a weekend. The Microsoft devs, if they wanted to dive into the issue, would be forced to Chesterton's Fence every single line of code. It WOULD have taken them years.

We've always recommended that programmers know the code one and possibly two layers below them. This recommendation failed here, and it failed during the GTA loading-times scandal. It has failed millions of times, and the ramifications of that failure are chaos in the form of performance issues.

I've come to realize that many of the problems we have gotten ourselves into are based on what I call feedback bandwidth. If you are an expert, as Casey is, you have infinite bandwidth, and you are only limited by your ability to experiment. If your turnaround time for a change is a couple of seconds, you will be able to create projects that are fundamentally impossible without that feedback.

If you need to discuss something with someone else, that bandwidth drops like a stone. If you need a team of experts, all IMing each other 1 on 1, you might as well give up. 2-week Agile sprints are much better than months-to-years-long waterfall, but we still have so much to learn. If you only know whether the sprint is a success after everyone comes together, you are doomed. The people iterating dozens of times every hour will eat your shorts.

I'm not saying that only a single developer should work on entire projects. But what I am saying is that when you have a Quarterback and Wide Receiver that are on the same page, talking at the same abstraction level, sometimes all it takes is one turn, one bit of information, to know exactly what the other is about to do. They can react together.

Simple is not easy. Matching essential complexity might very well be impossible. Communication will never be perfect. But you have to give it a shot.


Thanks for the “know the code one and possibly two layers below them” point, haven’t seen it written out explicitly before, but it sure puts into perspective why I consider some people much better programmers than others!


I started off programming doing web development, working on a community-run asynchronous game where we needed to optimize everything to run in minimal time and power to save on cost and annoyance. It was a great project to work on as a high schooler.

Then in college, I studied ECE and worked in a physics lab where everything needed to run fast enough to read out ADCs as quickly as allowed by the datasheet.

Then I moved to defense doing FPGA and SW work (and I moonlighted in another group consulting internally on verification for ASICs). Again, everything was tightly controlled. On a PCI-e transfer, we were allowed 5 us of maximum overhead. The rest of the time could only be used for streaming data to and from a device. So if you needed to do computation with the data, you needed to do it in flight and every data structure had to be perfectly optimized. Weirdly, once you have data structures that are optimized for your hardware, the algorithms kind of just fall into place. After that, I worked on sensor fusion and video applications for about a year where our data rates for a single card were measured in TB/s. Needless to say, efficiency was the name of the game.

After that, I moved to HFT. And weirdly, outside of the critical tick-to-trade path or microwave stuff, this industry has a lot less care around tight latencies and has crazy low data rates compared to what I'm used to working with.

So when I go look at software and stuff is slow, I'm just suffering because I know all of this can be done faster and more efficiently (I once shaved 99.5% of the run time off of a person's code with better data packing to align to cache lines, better addressing to minimize page thrashing, and automated loop unrolling into threads all in about 1 day of work). Software developers seriously need to learn to optimize proactively... or just write less shitty code to begin with.
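
For illustration, a minimal sketch of the "pack hot data to cache lines" part; the structs here are hypothetical, not the actual code being described:

    /* Before: hot and cold fields interleaved.  A loop that only touches
       position and velocity still drags the cold 64-byte name through
       the cache for every element. */
    struct particle_aos {
        double pos[3];
        char   name[64];   /* cold: only used for debugging output */
        double vel[3];
    };

    /* After: hot data packed contiguously (structure-of-arrays), so each
       64-byte cache line delivers 8 useful doubles to the hot loop and
       the cold data never enters the cache at all. */
    struct particles_soa {
        double *pos_x, *pos_y, *pos_z;
        double *vel_x, *vel_y, *vel_z;
        char  (*name)[64];
    };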


> There's an art to it

While that's true, in this particular case with Casey's impl it's not an art. The one thing that drastically improved performance was caching. Literally the simplest, most obvious thing to do when you have performance problems.


That's part of the art. It's obvious to anyone who knows it and mysterious and ineffable to anyone who doesn't.

The meta-point is that corporate developers have been through the hiring machine and are supposed to know these things.

Stories like this imply that in fact they don't.


The hiring machine merely ensures that they can leetcode their way out of an interview and into the job. It doesn't care about what they're supposed to know :)


Even something like a JSON parser is often claimed to be IO bound. It almost never is, because few parsers could keep up with modern IO, and some cannot keep up with old HDDs.


Thirding: the current crop of new developers have no freaking idea how much power they have at their fingertips. Us greybeard old game developers look at today's hardware and literally cream our jeans in comparison to the crap we put up with in the previous decades. People have no idea, playing with their crappy interpreted languages, just how much raw power we have if one is willing to learn the low-level languages to access it. (Granted, NumPy and BLAS do a wonderful job for JIT languages.)


I'd say it is almost the other way around. We have so much wonderful CPU power that we can spare some for the amazing flexibility of Python etc.

Also it's not that simple. One place I worked at (a scientific-computation kind of place), we'd prototype everything in Python, and production would be carefully rewritten C++. Standards were very high for prod, and we struggled to hire "good enough" modern C++ developers - endless debates about "ooh, template meta-programming or structs or bare-metal pointers" kind of stuff.

3 times out of 4, the Python prototype was faster than the subsequent C++. It was because it had to be: the prototype was run and re-run and tweaked many times in the course of development. The C++ was written once, deployed, and churned daily, without anyone caring for its speed.


Python has nice optimised libs for that, so it's not completely a surprise for that kind of application.

If you're doing generic symbol shuffling with a bit of math, Python is fast-ish to code and horribly slow to run. You can easily waste a lot of performance - and possibly cash - trying to use it for production.

Whether or not you'll save budget by writing your own optimised fast libs in some other lang is a different issue, and very application dependent.


Worth bearing in mind that Casey has a long history of unsuccessfully trying to nudge Microsoft to care about performance and the fact that he's still being constructive about it is to his credit.


I highly respect Casey but given his abrasive communication style I sometimes wonder if he is not trying to trigger people (MS devs in this case) to push him back so he can make his point.


Honestly, I felt like the ones to start with the condescending tones were the Microsoft devs who kept talking down to Casey about You Don't Understand How Hard This Is, when they also readily admitted they didn't understand the internals very well.


I don't think they're actually contradicting themselves there. They know enough about how hard text rendering is to conclude that they're better off delegating it to the team that specializes in that particular area, even though it means they have to settle for a good-enough abstraction rather than winning at a benchmark.


Enterprise deployment of a Somebody Else's Problem field can really harm innovation:

“Any object around which an S.E.P. is applied will cease to be noticed, because any problems one may have understanding it (and therefore accepting its existence) become Somebody Else's Problem.”


Agree. Rendering text well really is hard, if you sit down and try to do it from scratch. It’s just that dealing with all of the wonderful quirks of human languages doesn’t have to make it _slow_. That’s their mistake.

And you’re right; all refterm really does is move the glyph cache out to the GPU rather than copying pixels from a glyph cache in main memory every frame.
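
A minimal sketch of that idea, assuming a hash-addressed cache of glyph tiles backed by a single atlas texture on the GPU; the names and the rasterize callback are illustrative, not refterm's actual code:

    #include <stdint.h>

    #define CACHE_SLOTS 4096

    /* One rasterized glyph, stored as a fixed-size tile in a GPU atlas
       texture.  codepoint == 0 marks an empty slot. */
    typedef struct {
        uint32_t codepoint;
        uint32_t style;       /* bold/italic/colour attributes folded into the key */
        uint32_t tile_index;  /* which tile of the atlas holds the pixels */
    } GlyphSlot;

    typedef struct {
        GlyphSlot slots[CACHE_SLOTS];
        /* rasterizes one glyph into atlas tile `tile` (DirectWrite, FreeType, ...) */
        void (*rasterize)(uint32_t codepoint, uint32_t style, uint32_t tile);
        uint32_t next_tile;
    } GlyphCache;

    /* Return the atlas tile for a glyph, rasterizing only on a cache miss.
       Per frame the renderer then just draws textured quads that sample the
       atlas, instead of regenerating and copying glyph pixels every frame. */
    static uint32_t glyph_tile(GlyphCache *c, uint32_t cp, uint32_t style)
    {
        uint32_t h = (cp * 2654435761u) ^ style;
        for (uint32_t probe = 0; probe < CACHE_SLOTS; probe++) {
            GlyphSlot *s = &c->slots[(h + probe) % CACHE_SLOTS];
            if (s->codepoint == cp && s->style == style)
                return s->tile_index;            /* hit: nothing to rasterize */
            if (s->codepoint == 0) {             /* miss: rasterize exactly once */
                s->codepoint  = cp;
                s->style      = style;
                s->tile_index = c->next_tile++;
                c->rasterize(cp, style, s->tile_index);
                return s->tile_index;
            }
        }
        return 0; /* cache full; a real implementation would evict an old slot */
    }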


In my experience as a former game dev who moved to enterprise apps, game dev techniques are broadly applicable and speed up enterprise apps without compromising on functionality.

Consider memory management techniques like caching layers or reference pools. Or optimizing draws for the platform's render loop. Or being familiar with profiler tools to identify hotspots. These techniques are all orthogonal to functionality. That is, applying them when you see an opportunity to will not somehow limit features.
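
For illustration, a minimal object-pool sketch in C; the Bullet type and sizes are hypothetical, not from any particular engine:

    #include <stddef.h>

    /* Fixed-size object pool: allocate everything up front, then reuse
       slots via an intrusive free list instead of calling malloc/free
       once per object during gameplay. */
    typedef struct Bullet {
        float x, y, vx, vy;
        struct Bullet *next_free;   /* free-list link while unused */
    } Bullet;

    typedef struct {
        Bullet  storage[1024];
        Bullet *free_list;
    } BulletPool;

    static void pool_init(BulletPool *p)
    {
        p->free_list = NULL;
        for (size_t i = 0; i < 1024; i++) {
            p->storage[i].next_free = p->free_list;
            p->free_list = &p->storage[i];
        }
    }

    static Bullet *pool_acquire(BulletPool *p)
    {
        Bullet *b = p->free_list;
        if (b) p->free_list = b->next_free;
        return b;                   /* NULL if the pool is exhausted */
    }

    static void pool_release(BulletPool *p, Bullet *b)
    {
        b->next_free = p->free_list;
        p->free_list = b;
    }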

So why aren't the enterprise apps fast, if it's so easy? I think that boils down to incentives. Enterprise apps are sales- or product-led, and the roadmap only accommodates functionality that makes selling the software easier. Whereas in games, the table-stakes level of graphics you need to reach is not achievable by naively pursuing game features.

Put another way, computers and laptops are way stronger than consoles and performance is a gas. Enterprise devs are used to writing at 1 PSI or less and game devs are used to writing at 1 PSI or more.


With enterprise apps, I also have the budget to throw more computers at a problem. If it's between 2 weeks of my time, or to throwing another core at a VM, the extra core wins most of the time.


I actually have a lot of respect for old school game programmers because they have two traits that many of us who develop mainstream commercial software often lack: a) they care about performance and not in the abstract, but performance as evaluated by an actual human (latency issues in a messaging app are tolerable, a game with latency issues is simply not fun to play) and b) they can sit down without much fuss and quickly write the damn code (the ability that slowly atrophies as one works on a multi-year-old codebase where every change is a bit of a PITA). Sure, the constraints are different, but a lot of it is simply learned helplessness.


> might not be fully applicable to the development of mainstream commercial software that has to try to be all things to all people, including considerations like internationalization, accessibility, and backward compatibility.

Windows Terminal has none of that. And his refterm already has more features implemented correctly (such as proper handling of Arabic etc.) than Windows Terminal. See feature support: https://github.com/cmuratori/refterm#feature-support

Also see FAQ: https://github.com/cmuratori/refterm/blob/main/faq.md


Internationalization and accessibility are very important in game development. A lot of time is invested in this and larger studios have dedicated UI/UX teams which spend a lot of time on these issues.

The same is true of backwards compatibilty. As an example, making sure old save data is compatible with new versions is an important consideration.

Source: I'm a game programmer working mainly with graphics and performance, but I previously spent five years working on the UI team at a AAA studio.


How is it not applicable when the thing in question is rendering text, and rendering is the core of game development? This argument is stupid. Do you have to be a slowpoke to develop commercial apps?


My point is that a UI that meets the needs of as many users as possible, including things like internationalization and accessibility, is much more complex than a typical game UI. That complexity drives developers to use abstractions that often make it much more difficult to optimize. And in the big picture, support for these things that add complexity is often more important than top-speed rendering.


Games are typically much better at internationalization and accessibility than developer tooling, though. For example, this new Windows console doesn't have either, but all big games get translated and handle text from languages all over the world.


Video games often have an international audience and go to great lengths to support accessibility and multiple platforms, e.g. supporting both tablet and desktop. It's laughable how bad many enterprise UIs are that fail to handle different locales or have issues displaying right-to-left text, assuming everyone is an English speaker on a standard desktop environment, whereas many indie games manage to handle these things very well.


Games usually handle internationalization and accessibility much better than most software.

This includes audio localization (something no 'Enterprise' software has ever needed AFAIK), and multiple colour palettes for different types of colour blindness.

Sometimes video games are the only software with reasonable localizations I ever find installed in a particular computer.


Can't recall the last time I played a game with no internationalization support.


Backward compatibility is a huge one here.

There is a newer version of component X but we can't leverage that due to dependency Y.


I found it very funny that the Hindi sample text on display, in the YouTube refterm demo, means “You can wake up someone who is sleeping, but how do you wake up someone who is hell-bent on pretending to sleep?”.


A bit off topic, but has anybody followed the performance issues of Microsoft Flight Simulator 2020? For more than half a year it was struggling with performance because it was CPU-heavy, only loading one core, etc. It barely ran on my i5 6500. Fast forward half a year: they wanted to release it on Xbox, MS/Asobo moved a lot of computation to the GPU, and the game started running smoothly on the very same i5 with maximized quality settings.

You just begin to wonder how these things happen. You would think top programmers work at these companies. Why would they not start with the good concept, loading the GPU first, etc.? Why did it take them so much time to finally do it correctly? Why waste time not doing it at the beginning?


It's pretty straightforward case of prioritization. There are always more things to do on a game project than you have people and time to do.

The game runs well enough, so the people who could optimize things by rewriting them from CPU to GPU are doing other things instead. Later, performance becomes a noticeable problem to the dev team, through customer feedback and the need to ship in more resource-constrained environments (VR and Xbox), and those people can then do the work to improve performance.

It's also handy to have a reference CPU implementation both to get your head around a problem and because debugging on the GPU is extremely painful.

To go further down the rabbit hole it could be that they were resource constrained on the GPU and couldn't shift work there until other optimizations had been made. And so on with dependencies to getting a piece of work done on a complex project.


Makes sense, and then it kinda agrees with the parent comment that "Microsoft developers simply didn't have performance in the vocabulary".

Yes, there is no doubt that "there are always more things to do on a game project than you have people and time to do". However, how is there time to first make a "main thread on a single core" monster and then redo it according to game development 101 - make use of the GPU?

It is no joke - the GPU was barely loaded while the CPU was choking. On a modern game released by a top software company proudly presenting their branding in the very title.


> However how there is time to firstly make "main thread on single core" monster and then redo it according to game development 101 - make use of GPU.

The single threaded code was taken from Microsoft Flight Simulator X, the previous version of this game from 2006. It was not done from scratch for the new game, and it still hasn't been replaced. They've just optimized parts of it now.

Another important performance bottleneck is due to the fact that the game's UI is basically Electron. That's right, they're running a full-blown browser on top of the rest of the graphics for the UI. Browsers are notoriously slow because they support a million edge cases that aren't always needed in a game UI.

For anyone interested in learning more about Microsoft Flight Simulator 2020 optimizations I can recommend checking out the Digital Foundry interview with the game's technical director. https://www.youtube.com/watch?v=32abllGf77g


It was actually made by a third-party game development studio rather than Microsoft.

Also the assumption that the culture of the Windows Terminal team is the same as the team building a flight simulator is a bit far fetched. Large organisations typically have very specific local cultures.

Rewriting stuff from CPU to GPU also isn't 101 game development, both because it's actually quite hard and because not all problems are amenable to GPU compute - some would actually slow down.


I work within game development. Mainly on graphics programming and performance. There's always a number of things I know would speed various systems up but that I don't have time to implement because there are so many bigger fires to put out.

Also, I have rewritten CPU implementations to run on the GPU before and it's often nontrivial. Sometimes even in seemingly straightforward cases. The architectures are very different, you only gain speed if you can properly parallelize the computations, and a lot of things which are trivial on the CPU becomes a real hassle on the GPU.


It may sound oversimplified, but IME PC games are only optimized to the point where they run well on the development team's beefy PCs (or in the best case some artificial 'minimal requirement PC with graphics details turned to low', but this minimal-requirement setup usually isn't taken very seriously by the development team).

When porting to a game console you can't simply define some random 'minimal requirement' hardware, because the game console is that hardware. So you start looking for more optimization opportunities to make the game run smoothly on the new target hardware, and some of those optimizations may also make the PC version better.


I'd like to second this and add that this is partly why I feel it's important to develop games directly for all your target platforms rather than target one or two consoles and then port the game.


Because a rule of thumb is to not focus too much on performance at the beginning of a project. Better a completed project with some performance issues than a half product with hyper speed. The key thing with development is to find some kind of balance between all these attributes (stability, performance, looks, reusability, etc.). In the case of Flight Simulator, I'm not sure what the motives were. Surely they had some serious time constraints. I think they did an acceptable job there.


Agreed. As someone who spends most of their time on performance related issues, it's important to keep in mind that sometimes performance issues are strongly tied to the architecture of the program. If the data structures aren't set up to take advantage of the hardware, you'll never get the results you're hoping for. And setting up data structures is often something that needs to be thought about at the beginning of the project.


Completely agree on rule of thumb and can't doubt they had their motives. It is no way that simple.

Then again isn't it like 101 of game development?

Imagine releasing a game that looks stunning, and the industry agrees that it pushes the limits of a modern gaming PC (hence it runs poorly on old machines). Fast forward some time - "oh BTW, we did it incorrectly (we had our motives); now you can run it on an old machine just fine, no need to buy an i9 or anything".


Because it's never that obvious, it's not really 101 of gamedev to move everything to the GPU.

You know your target specs but where the bottlenecks are and how to solve them will be constantly shifting. At some point the artists might push your lighting or maybe it's physics now, maybe it's IO or maybe networking. Which parts do you move to the GPU?

Also, a GPU is not a magic bullet. It's great for parallel computation, but not all problems can be solved like that. It's also painful to move memory between the CPU and the GPU, and it's a limited resource, so you can't have everything there.


True. Performance is not an issue if no one is playing it.


I'll reiterate a rant about Flight Simulator 2020 here because it's on-topic.

It was called "download simulator" by some, because even the initial installation phase was poorly optimised.

But merely calling it "poorly optimised" isn't really sufficient to get the point across.

It wasn't "poor", or "suboptimal". It was literally as bad as possible without deliberately trying to add useless slowdown code.

The best equivalent I can use is Trivial FTP (TFTP). It's used only in the most resource-constrained environments where even buffering a few kilobytes is out of the question. Embedded microcontrollers in NIC boot ROMs, that kind of thing. It's literally maximally slow. It ping-pongs for every block, and it uses small blocks by default. If you do anything to a network protocol at all, it's a step up from TFTP. (Just adding a few packets' worth of buffering and windowing dramatically speeds it up, and this enhancement thankfully did make it into a recent update of the standard.)
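
Back-of-the-envelope numbers for why per-block ping-pong is so slow; the 50 ms RTT and the 16-block window are illustrative:

    #include <stdio.h>

    int main(void)
    {
        double block = 512;       /* bytes per TFTP data block (default) */
        double rtt   = 0.050;     /* 50 ms round trip */
        double stop_and_wait = block / rtt;        /* one block per RTT */
        double windowed      = 16 * block / rtt;   /* 16 blocks in flight */
        printf("stop-and-wait: %.1f KB/s\n", stop_and_wait / 1024);
        printf("16-block window: %.1f KB/s\n", windowed / 1024);
        return 0;
    }
    /* prints: stop-and-wait: 10.0 KB/s
               16-block window: 160.0 KB/s */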

People get bogged down in these discussions around which alternate solution is more optimal. They're typically arguing over which part of a Pareto frontier they think is most applicable. But TFTP and Microsoft FS2020 aren't on the Pareto frontier. They're in the exact corner of the diagram, where there is no curve. They're at a singularity: the maximally suboptimal point (0,0).

This line of thinking is similar to the "Toward a Science of Morality" by the famous atheist Sam Harris. He starts with a definition of "maximum badness", and defines "good" as the direction away from it in the solution space. Theists and atheists don't necessarily have to agree with the specific high-dimensional solution vector, but they have to agree that that there is an origin, otherwise there's no meaningful discussion possible.

Microsoft Terminal wasn't at (0,0) but it was close. Doing hilariously trivial "optimisations" would allow you to move very much further in the solution space towards the frontier.

The Microsoft Terminal developers (mistakenly) assumed that they were already at the Pareto frontier, and that the people that opened the Github Issue were asking them to move the frontier. That does usually require research!


I thought it looks more like 'no one touches it at all after the functional part is finished'. It is insane that in 2021 someone still does decompression and downloading in the same thread and blocks the download while unzipping the resource. Even a 2000s bash programmer knows you'd better not do them strictly in order or it will be slow...
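
A minimal sketch of the producer/consumer split being suggested, with a toy ring buffer standing in for real network and decompression code (everything here is illustrative):

    #include <pthread.h>
    #include <stdio.h>

    /* One thread "downloads" chunks into a small ring buffer while another
       "decompresses" them, so neither step blocks the other. */
    #define SLOTS 4
    static int ring[SLOTS];
    static int head, tail;   /* producer advances head, consumer advances tail */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    static void *download(void *arg)
    {
        (void)arg;
        for (int chunk = 1; chunk <= 20; chunk++) {
            pthread_mutex_lock(&m);
            while ((head + 1) % SLOTS == tail)      /* buffer full: wait */
                pthread_cond_wait(&not_full, &m);
            ring[head] = chunk;                     /* stand-in for a network read */
            head = (head + 1) % SLOTS;
            pthread_cond_signal(&not_empty);
            pthread_mutex_unlock(&m);
        }
        return NULL;
    }

    static void *decompress(void *arg)
    {
        (void)arg;
        for (int done = 0; done < 20; done++) {
            pthread_mutex_lock(&m);
            while (tail == head)                    /* buffer empty: wait */
                pthread_cond_wait(&not_empty, &m);
            int chunk = ring[tail];                 /* stand-in for inflating a chunk */
            tail = (tail + 1) % SLOTS;
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&m);
            printf("decompressed chunk %d\n", chunk);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t dl, dc;
        pthread_create(&dl, NULL, download, NULL);
        pthread_create(&dc, NULL, decompress, NULL);
        pthread_join(dl, NULL);
        pthread_join(dc, NULL);
        return 0;
    }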


It also downloads hundreds of thousands of tiny files over a high-latency connection.

At least initially, it didn't use the XBox CDN either.


The damn downloader in MSFS is the most infuriating thing. In Canada, on either of the main ISPs, I top out at 40-ish Mbps, whereas Steam and anything else really does close to the full 500 Mbps. It also only downloads sequentially, pausing to decrypt each tiny file. And the updates are huge, so it takes a good long while to download 2+ GB.


Sometimes you don't know you have a performance problem until you have something to compare it to.

Microsoft's greatest technical own goal of the 2010s was WSL 2.

The original WSL was great in most respects (authentic to how Windows works; just as Windows NT has a "Windows 95" personality, Windows NT can have a "Linux" personality) but had the problem that filesystem access went through the Windows filesystem interface.

The Windows filesystem interface is a lot slower for metadata operations (e.g. small files) than the Linux filesystem interface and is unreformable because the problem is the design of the internal API and the model for security checking.

Nobody really complained that metadata operations in Windows were slow; they just worked around it. Some people, though, were doing complex build procedures inside WSL (building a Linux kernel), and it was clear then that there was a performance problem relative to Linux.

For whatever reason, Microsoft decided this was unacceptable, so they came out with WSL 2 which got them solidly into Kryptonite territory. They took something which third party vendors could do perfectly well (install Ubuntu in a VM) and screwed it up like only Microsoft can (attempt to install it through the Windows Store, closely couple it to Windows so it almost works, depend on legitimacy based on "it's from Microsoft" as opposed to "it works", ...)

Had Microsoft just accepted that metadata operations were a little bit slow, most WSL users would have accepted it, the ones who couldn't would run Ubuntu in a VM.


WSL2 worked for me in a way that WSL1 did not and it had to do with build times while doing tutorial projects. I am not an expert, but my own experience was that it was a massive improvement.


Parent is not refuting that WSL2 performed better than WSL1, they're arguing that a reasonable response to WSL1 giving you slow build times might have simply been to use a VM instead.

Microsoft being Microsoft, they didn't want people like you to hop to VMware or VirtualBox and use a full, branded instance of Fedora or Ubuntu, because then you would realize that the next hop (moving to Linux entirely) was actually quite reasonable. So they threw away WSL1 and built WSL2. Obviously WSL2 worked better for you than WSL1, but you also did exactly what Microsoft wanted you to do, which is to their benefit, and not necessarily to yours.


Additionally, the problems that held back WSL1 performance still exist, and WSL1 wasn't their only victim. So Microsoft has abandoned WSL1, but they still need to address those underlying problems. Only now, if they do successfully deal with those issues (most likely as part of their DirectStorage effort and cloning io_uring), we're unlikely to see WSL1 updated to benefit—even though that might result in WSL1 becoming a better user experience than WSL2.


What's wrong with the performance? I don't notice any performance issues in Windows terminal.


Do you use colored text tags? The performance issues only happened when that was on.


Yes


I just launched "dir"

https://i.imgur.com/lkbOR3i.png

can't even print properly the decimal separator.

maybe it wasn't that easy.


Now what? What now? How about now? Just built it from source.

https://imgur.com/dGI5W2S


Oh and you can change font

Here's with Courier New: https://imgur.com/t88wwTx

EDIT: And here's with JetBrains Mono: https://imgur.com/HYEYtGb

EDIT: Do I need to make a font parade?

EDIT: Consolas! https://imgur.com/klPufrd Where can we go without the classic? IBM Plex Mono https://imgur.com/e5tQ0NB


The font, m8, the font. Use a monospace font and it works like a charm.


Does the GP's chosen font work correctly in Windows Terminal, though? If so, then that proves that there is indeed more to a fully functional terminal renderer than refterm covers.


Refterm is not a full-featured terminal in terms of configurability, but it has all the features needed for rendering. Configuration - choosing fonts, choosing colors, tabs, whatever - is misc. features which are unrelated to rendering. The case here is about rendering, and it's exactly what is shite in every terminal emulator. I don't understand why everyone is arguing about it anyway. Refterm provides a fix; Windows Terminal should simply implement it. Is this about dignity or what? Are you not engineers? Should you not prioritise software quality above everything else?

EDIT: Removed the argument about ease of development, because refterm is easy.

This bugs me every time. "Wow, this software works so good, but we are not gonna make our software like that, no, we'll stick to our shite implementation."


The fact that Casey Muratori's proposed approach requires the terminal to reimplement the process of correctly mapping characters to glyphs - including stuff like fallbacks to other fonts - is a huge part of the argument for why it's much harder to implement and more complicated than he claims. If it really doesn't do that right for something as simple as a decimal separator for the font some random HN commenter happened to use, that does tend to suggest the Microsoft employees are in the right here.


The "font some random HN commenter happened to use" is some f****** proportional Calibri. I want to see someone use it in any terminal emulator. Refterm defaults to Cascadia Code, but, fair enough, it doesn't have fallback yet.

Its description also says: "Reference monospace terminal renderer". "Monospace" is there for a reason.

It's worth mentioning, though, that Windows Terminal also defaults to Cascadia Code, and Cascadia Code was installed automatically on my machine, so it's de facto the new standard monospace font on Windows starting from 10.


> some f*** proportional Calibri Refterm defaults to Cascadia Code

Cascadia is as broken as the other font, so what now?

https://i.imgur.com/WeV8Ror.png

maybe writing a Unicode renderer isn't that easy? maybe drop the attitude?


>maybe drop the attitude?

You came here with an attitude.

> I just launched "dir" https://i.imgur.com/lkbOR3i.png can't even print properly the decimal separator. maybe it wasn't that easy.


Turns out it defaults to Cascadia Mono, my bad. Still, your argument is wrong, because while it doesn't work on your machine, on mine it does.


Which is exactly the point: the number of shortcuts taken to convert code points into fast graphics makes it just a nice hack thrown together, and the devs were right to prefer the slow but correct approach.


That is a terminal bug and not a rendering bug, though, since the problem was that the terminal didn't properly fetch your user settings here. Feeding the same character into the other renderer would cause the same issue.

Nobody said he made a fully functional better terminal, just that the terminal rendering was better and functional. Doing everything needed for a fully functional terminal is a lot of work, but doing everything needed for terminal rendering isn't all that much work.


> Nobody said he made a fully functional better terminal

> The "complaining developer" produced a proof of concept in just two weekends that notably had more features[1] and was more correct than the Windows Terminal!

easily falsifiable bullshit found to be false.

https://news.ycombinator.com/item?id=28744084


I have to say, though, that the responder to the first comment did a bad job of conveying what exactly refterm is... Apparently people falsely think that refterm is a terminal emulator, using "terminal" and "terminal renderer" interchangeably, while it isn't one.

If you were to open refterm once, you'd see text that explicitly states "DO NOT USE THIS AS A TERMINAL EMULATOR", or something like that (I can't open it right now to copy the exact wording).


You can’t use a non-monospace font on a tiled space. That literally makes no sense. Of course it won’t look right. This is like asking why you can’t use `out` parameters on an inherently async function.


Whether the font is monospace or not isn't really the problem - that causes some aesthetically ugly spacing, but that's to be expected and it's still readable. The big issue is that the code has completely failed to find a glyph for one of the characters used in something as commonplace as a directory listing from the dir command and people expect better than this from font rendering in modern applications.


If you copy and paste, the characters are there. The problem is that the glyph simply is off the tile.


Then again, refterm doesn't f*ck it up. It still renders the proportional font, just aligned to tiles. It is essentially doing the same thing it does for emoji.


As far as I understood from the video about refterm, the speedup is mostly due to not rendering literally every frame in sequence (after all, who needs that), which is what Windows Terminal seems to be doing.

That seems like it would be unaffected by correcting font rendering.


> That seems like it would be unaffected by correcting font rendering.

It is.

"... extremely slow Unicode parsing with Uniscribe and extremely slow glyph generation with DirectWrite..."

Glyph generation is about rasterising text, because you can't just feed it the font file.


This is some schoolyard level stuff right here. The GP isn't using a monospaced font. Who, in the history of terminal emulators, has wanted to use a non-monospaced font in their terminal?


This is exactly the sort of "we can just skip that feature to make it faster!" edge case that I was talking about in my post.


But it isn't an edge case! It's not an edge case, if it isn't a use case!

This is an edge case as much as building a rasterizer directly into the terminal is an "edge case".


> But it isn't an edge case! It's not an edge case, if it isn't a use case!

The fact that a random commenter on HN used a non-monospaced font with refterm actually makes it a use case.

I do, however, agree that it is an edge case with a very low probability.


Because it's such an improbable edge case, it seems like it's not relevant to the more general discussion of "does refterm's speed and features actually show that the rendering problem is far easier than the Microsoft developers made it out to be".

The Microsoft terminal doesn't render monospaced fonts, the overwhelmingly common case, nearly as fast as refterm. If rendering variable-width fonts is somehow intrinsically insanely expensive for some reason (which I haven't seen anyone provide good evidence for), then a good implementation would still just take refterm's fast monospaced rendering implementation and use it for monospaced fonts, and a slower implementation for variable-width fonts.

That is - refterm's non-existent variable-width font rendering capabilities do not excuse the Windows terminal's abysmal fixed-width font rendering capabilities.
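To spell that out (a hypothetical dispatch, nobody's actual code): one cheap check per frame is all the "support" for variable-width fonts would cost the fast path.

  /* Hypothetical fast-path/slow-path split; illustrative only. */
  #include <stdio.h>

  typedef struct { int font_is_monospaced; } Terminal;

  static void render_grid_fast(Terminal *t)    { (void)t; puts("fixed-cell grid path"); }
  static void render_general_slow(Terminal *t) { (void)t; puts("general layout path"); }

  void render_frame(Terminal *t)
  {
      if (t->font_is_monospaced)
          render_grid_fast(t);      /* the overwhelmingly common case */
      else
          render_general_slow(t);   /* variable-width fonts and other exotica */
  }

  int main(void)
  {
      Terminal t = { 1 };
      render_frame(&t);             /* prints "fixed-cell grid path" */
      return 0;
  }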


Agreed. It doesn't seem like it is relevant. My comment was more addressing that it _is_ an edge-case, albeit a very unlikely one.


Wait, what? Your edge case is something that no one would ever (should ever?) do? Are you going to complain about it not rendering scripts fonts correctly either?

Also, it's worth noting that this isn't a compelling argument in the first place, because the Windows Terminal doesn't even come close to rendering readable Arabic, it fucks up emoji rendering, etc – all cases that Casey was able to basically solve after two weekends of working on this.


In my 25 years of software development I've found that I'm rarely able to enumerate what an application should or shouldn't do on my own. Unless the app is extremely simple and has very few options it's incredibly unusual for any individual to understand everything about it. While I don't have a use case where I'd want to use a variable space font in a terminal, that doesn't mean such a case doesn't exist for anyone. Maybe some people want that feature.

Windows Terminal, for some reason, gives users the option to change their font to one that isn't monospaced, so I'd argue that it should render them correctly if the user chooses to do that.


Would you actually...?



doesn't work even with the proper font https://i.imgur.com/WeV8Ror.png


You're not arguing in good faith.

Casey threw something together in a matter of days that had 150% of the features of the Windows Terminal renderer, but none of the bug fixing that goes into shipping a production piece of software.

That screenshot you keep parading around is a small issue with a quick fix. It's not like Casey's approach is inherently unable to deal with punctuation!

You don't discard the entire content of a scientific journal paper because of a typo.

"Sorry Mr Darwin, I'm sure you believe that your theory is very interesting, but you see here on page 34? You wrote 'punctuted'. I think you meant 'punctuated'. Submission rejected!"


In this case, the bug fixing is probably the lion's share of the work though - there's a huge amount of subtle edge cases involved in rendering text, and the Microsoft employees almost certainly know this. And the example that broke it isn't even something particularly obscure. We're literally talking about the output of the dir command, one of the first things someone is likely to do with a terminal window, not displaying correctly. He basically did the easy part of the work and lambasted some Microsoft employees as idiots because they thought it was more complex than that.


In Casey's defense (I'm ambivalent on this one), while the dir command itself isn't obscure, one could argue that using a no-op Unicode character as the digit group separator is an obscure case, at least for an American programmer. But I think your overall point still stands.


You've lost the original point: everyone was pretending this refterm was ready to replace the terminal app, criticizing Microsoft for taking the slow but sure approach:

> The "complaining developer" produced a proof of concept in just two weekends that notably had more features[1] and was more correct than the Windows Terminal!

But now, apparently, pointing out that "MS was right not to want to take shortcuts in Unicode rendering" has morphed into "criticizing refterm in bad faith for not being production ready".

Who's not arguing in good faith here?


>refterm was ready to replace the terminal app

Considering Casey himself puts front and center the disclaimer that this is solely intended to be a reference, and goes into as much detail in his videos, I don't know where you got this from. I don't think anyone is under the illusion that this could replace the actual terminal. It's just meant to show that there's a minimum level of performance to be expected for not a huge amount of effort (a couple of weekends' worth) and there is no excuse for less.


> everyone was pretending this refterm was ready to replace the terminal app

Who said that? Refterm isn't a fully functional terminal, it is just a terminal renderer bundled with a toy terminal.


I haven't changed anything, just downloaded and launched it; that's the result. If the term only works with one font, why is the software picking a random one from the system config?


It looks like refterm is hard-coded to use Cascadia Mono, which isn't included in-box with Windows 10. So I don't know what happens if you don't have that font. If that's the only issue, then I think we can let that one go, as refterm is clearly only a proof of concept, and one-time logic for choosing the correct font at startup would presumably have no effect on rendering speed.


https://i.imgur.com/WeV8Ror.png

No, it doesn't work with Cascadia either.

So what now?


I suspect an i18n issue. What locale are you using, and what is the decimal separator character supposed to be in your locale?



It looks like your "Digit grouping symbol" field is empty. I'm sure that's standard in some locales, though not for US English. I don't know how to make that field empty; when I try, Windows says it's invalid. So I wonder if your locale sets that separator to some kind of Unicode character that, in a proper renderer, is equivalent to no character at all. If that's the case, then I'm guessing refterm could handle that character as easily as it handles VT escape codes. But this does lend some weight to the position that Casey was oversimplifying things a bit.


Well, whatever is different in your settings, mine renders normally: https://imgur.com/dGI5W2S

EDIT: I am sorry for the attitude, changed "wrong" to "different"


Their settings aren't wrong, just different, likely because of differing standards for digit grouping across locales. So this is a case that refterm clearly doesn't support. This case by itself doesn't invalidate refterm's approach to rendering, but I can see why the team at Microsoft, knowing that there are many such cases, would favor abstraction over the absolute best possible speed.


which is exactly the point.


A pretty one-sided view. I use Windows Terminal because it supports multiple tabs - multiple cmds and some WSL bashes.

I don't care at all if this or that terminal uses a bit more RAM or is a few milliseconds faster.


Then they should have said so on the issue: "We don't value performance and won't spend any resources to fix this", rather than doing the dance of giving bullshit reasons for not doing it.

Anyway, resource usage matters quite a lot: if your terminal uses CPU like an AAA game running in the background, you will notice fan noise, degraded performance, and potentially crashes due to overheating everywhere else on the computer.


> I don't care at all if this or that terminal uses a bit more RAM or is a few milliseconds faster

Did you watch the video? The performance difference is huge! 0.7 seconds vs 3.5 minutes.


> The "complaining developer" produced a proof of concept in just two weekends...

That developer was also rather brusque in the GitHub issue and could use a bit more humility and emotional intelligence. Which, by the way, isn't on the OP blog post's chart of a "programmer's lifecycle". The same could be said of the MS side.

Instead of both sides asserting (or "proving") that they're "right" could they not have collaborated to put together an improvement in Windows Terminal? Wouldn't that have been better for everyone?

FWIW, I do use windows terminal and it's "fine". Much better than the old one (conhost?).


> could they not have collaborated to put together an improvement in Windows Terminal?

My experience with people that want to collaborate instead of just recognizing and following good advice is that you spend a tremendous amount of effort just to convince them to get their ass moving, then find out they were not capable of solving the problem in the first place, and it’s frankly just not worth it.

Much more fun to just reimplement the thing and then say “You were saying?”


Haha, I just saw the YouTube video with the developer demoing his project, and the text in his terminal literally reads, in Hindi: "you can wake up someone who is asleep, but how do you wake up someone who is just closing his eyes and pretending to be asleep?"

The developer surely was having a tonne of fun at the expense of Microsoft. Perhaps a little too much fun imo.


> Much more fun to just reimplement the thing and then say “You were saying?”

The thing is NO ONE likes to lose face. He could have still done what he did (and enjoy his "victory lap") but in a spirit of collaboration.

To be fair, MS folks set themselves up for this but the smart-alec could have handled it with more class and generosity.


This was purely self-inflicted.


It’s often easier to put together something from scratch, if you’re trying to prove a point, than it is to fix a fundamentally broken architecture.


That sounds like you've never seen performance of a heavily worked-on subsystem increase by 10x because one guy who was good at that kind of stuff spent a day or two focused on the problem.

I've seen that happen at least 10 times over my career, including at very big companies with very bright people writing the code. I've been that guy. There are always these sorts of opportunities in all but the most heavily optimized codebases, most teams either a) just don't have the right people to find them, or b) have too much other shit that's on fire to let anyone except a newbie look at stuff like performance.


More generally, in my experience performance isn't looked at because it's "good enough" from a product point of view.

"Yes it's kinda slow, but not enough so customers leave so who cares." Performance only becomes a priority when it's so bad customers complain loudly about it and churn because of it.


There’s a bit of incentive misalignment when commercial software performance is concerned. If we presume customers with tighter budgets tend to be more vocal and require more support, and customers on slower machines are often customers on tighter budgets, the business as a whole might actually not mind those customers leaving as it’d require less support resources spent on customers who are more difficult to upsell to.

Meanwhile, the majority of customers with faster machines are not sensitive enough to feel or bother about the avoidable lag.


Though we're now in a situation where lots of software (see: Word and Excel) is painfully slow even on high-end desktop hardware.


That is probably why there is a law called Wirth's law about the Wintel ecosystem: what Andy giveth, Bill taketh away.

Or Gates's law: "The speed of software halves every 18 months."


There's also sometimes an incentive to slow things down, because if it is too fast, the client will perceive that he paid too much money for an operation that takes no time, i.e. if it seems like the work doesn't exist, it seems unimportant.


It would be a shame for a truly artistically designed busy-wheel if it didn't get to turn once or twice, regardless of whether it was actually needed!


I've made such optimizations and others made them in my code, so:

c) slow code happens to everyone; sometimes you need a fresh pair of eyes.


Absolutely: what I should have said is that I've been the one to cause performance problems, I've been the one to solve them, I've been the manager who refused to allocate time to them because they were not important enough, and I've been the product owner who made the call to spend eng hours on them because they were. There are many systemic reasons why this stuff does not get fixed and it's not always "they code like crap", though sometimes that is a correct assessment.

But show me a codebase that doesn't have at least a factor of 2 improvement somewhere and does not serve at least 100 million users (at which point any perf gain is worthwhile), and I'll show you a team that is being mismanaged by someone who cares more about tech than user experience.


"Need" especially, because often it's just that those fresh eyes don't have any of the political history, so doesn't have any fallout from cutting swaths through other people's code that would be a problem for someone experienced in the organisation.


I’ve seen it at least as many times, too. Most of the time, the optimization is pretty obvious. Adding an index to a query, or using basic dynamic programming techniques, or changing data structures to optimize a loop’s lookups.

I can’t think of a counter example, actually (where a brutally slow system I was working on wasn’t fairly easily optimized into adequate performance).


It is nice when a program can be significantly sped up by a local change like that but this is not always the case.

To go truly fast, you need to unleash the full potential of the hardware and doing it can require re-architecting the system from the ground up. For example, both postgres and clickhouse can do `select sum(field1) from table group by field2`, but clickhouse will be 100x faster and no amount of microoptimizations in postgres will change that.


No argument from me. I’m just pointing out that it’s wrong to assert that programmers are generally incorrect when they say something can be optimized. Many times in my career, optimizations were trivial to implement, and the impact was massive. There have been a few times when the optimization was impossible without a major rearchitecture, but those times were rare.


Yah, I was going to say something like this. I've fixed these problems a few times, and I don't really think any of them were particularly hard. That is because if you're the first person to look at something with an eye to performance, there is almost always some low-hanging fruit that will gain a lot of perf. Being the 10th person to look at it, or attacking something that is widely viewed as algorithmically at its limit, OTOH, is a different problem.

I'm not even sure it takes an "experienced" engineer; one of my first Linux patches was simply to remove a goto, which dropped an exponential factor from something and changed it from "slow" to imperceptible.


I don't know about you, but I was really laughing out loud reading that GitHub conversation.

GPUs: able to render millions of triangles with complex geometrical transformations and non-trivial per-pixel programs in real time

MS engineers: drawing colored text is SLOW, what do you expect

P.S. And yes, I know, text rendering is a non-trivial problem. But it is a largely solved problem. We have text editors that can render huge files with real-time syntax highlighting, browsers that can quickly lay out much more complex text, and, obviously, Linux and Mac terminal emulators that somehow have no issue whatsoever rendering large amounts of colored text.


To be fair to the MS engineers, from their background experience with things like DirectWrite, they would have an ingrained rule of thumb that text is slow.

That's because it is slow in the most general case: If you have to support arbitrary effects, transformations, ligatures, subpixel hinting, and smooth animations simultaneously, there's no quick and simple approach.

The Windows Terminal is a special case that doesn't have all of those features: No animation and no arbitrary transforms dramatically simplifies things. Having a constant font size helps a lot with caching. The regularity of the fixed-width font grid placement eliminates kerning and any code path that deals with subpixel level hinting or alignment. Etc...

It really is a simple problem: it's a grid of rectangles with a little sprite on each one.
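To make that concrete, here is a rough sketch of the data model (made-up names and sizes, not refterm's or Windows Terminal's actual code): every cell is a glyph index plus colors, and drawing is just stamping pre-rasterized tiles out of a glyph atlas.

  /* Illustrative sketch only: a fixed grid of cells, each referencing a tile
     in a pre-rasterized glyph atlas. Names and sizes are made up. */
  #include <stdint.h>

  #define ATLAS_COLS 32   /* atlas laid out as a grid of equally sized tiles */
  #define CELL_W 9        /* hypothetical cell size in pixels */
  #define CELL_H 17

  typedef struct {
      uint32_t glyph;     /* index of the tile in the glyph atlas */
      uint32_t fg, bg;    /* packed foreground/background colors */
  } Cell;

  /* CPU reference of what the pixel shader does: copy one CELL_W x CELL_H tile
     out of the atlas, colored by the cell's fg/bg. A real renderer blends by
     coverage; the threshold here just keeps the sketch short. */
  void draw_cell(uint32_t *dst, int dst_stride,
                 const uint8_t *atlas, int atlas_stride,
                 const Cell *c, int px, int py)
  {
      int ax = (int)(c->glyph % ATLAS_COLS) * CELL_W;
      int ay = (int)(c->glyph / ATLAS_COLS) * CELL_H;
      for (int y = 0; y < CELL_H; ++y)
          for (int x = 0; x < CELL_W; ++x) {
              uint8_t coverage = atlas[(ay + y) * atlas_stride + (ax + x)];
              dst[(py + y) * dst_stride + (px + x)] = coverage ? c->fg : c->bg;
          }
  }

On the GPU the whole screen becomes one draw call: the cell buffer and the atlas are two textures, and the shader does the per-pixel lookups.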


Supercomputer or not, it's a terminal.

In the real-CRT-terminal days of the 1970's & 1980's of course the interface to the local or remote mainframe or PC was IO-bound but not much else could slow it down.

UI elements like the keyboard/screen combo have been expected to perform at the speed of light for decades using only simple hardware to begin with.

The UX of a modern terminal app would best be not much different than a real CRT unit unless the traditional keyboard/display UI could actually be improved in some way.

Even adding a "mouse" didn't slow down the Atari 400 (which was only an 8-bit personal computer) when I programmed it to use the gaming trackball to point & click plus drag & drop. That was using regular Atari Basic, no assembly code. And I'm no software engineer.

A decade later once the mouse had been recognized and brought into the mainstream it didn't seem to slow down DOS at all, compared to a rodent-free environment.

Using modern electronics surely there should not be any perceptible lag compared to non-intelligent CRTs over dial-up.

Unless maybe the engineers are not as advanced as they used to be decades ago.

Or maybe the management/approach is faulty, all it takes is one non-leader in a leadership position to negate the abilities of all talented operators working under that sub-hierarchy.


Exactly. Fast terminal rendering on bitmap displays has been a solved problem for at least 35+ years. Lower resolutions, sure, but also magnitudes slower hardware.


It's more subtle than that. What the Microsoft engineers are saying is that the console's current approach to drawing text is inherently slow in this particular case, due to the way the text drawing library it's based on uses the GPU. The proposed solution requires the terminal to have its own text drawing code specific to the task of rendering a terminal, including handling all the nasty subtleties and edge cases of Unicode, which must be maintained forever. This is not trivial at all; every piece of code ever written to handle this seems to end up having endless subtle bugs involving weird edge cases (remember all those stories about character strings that crash iPhones and other devices - and the open source equivalents are no better). It's relatively easy to write one that seems to work for the cases that happen to be tested by the developer, but that's only a tiny part of the work.


I’m fairly sure no one in question wrote a new font renderer; they just rendered all available fonts upfront with a system library, uploaded the result to the GPU, and let it be used as a bitmap.

Text rendering is still done mostly on the CPU side in the great majority of applications, since vector graphics are hard to do efficiently on GPUs.


Simply shaping text using state of the art libraries (like harfbuzz) can take an INCREDIBLE amount of time in some cases. If you're used to rendering text in western character sets you may think it can always be fast, but there are cases where it's actually quite slow! You get a sense for this if you try to write something like a web browser or a word processor and have to support people other than github posters.

Of course in this case it seems like it was possible to make it very fast, but people who think proper text rendering is naturally going to be somewhat slow aren't always wrong.

Saying that text rendering is "largely solved" is also incorrect. There are still changes and improvements being made to the state of the art and there are still unhappy users who don't get good text rendering and layout in their favorite applications when using a language other than English.


> naturally going to be somewhat slow

Naturally slow for a text renderer might mean it renders in 4ms instead of 0.1ms.


Yes, and for a terminal like discussed in the article, 4ms is considered unacceptably slow


I dunno what framerate you expect your terminal to run at, but 250hz should be enough for everyone.


You are right in the general case. But terminals are a specific niche, not requiring the full extent of text rendering edge cases that a browser, WYSIWYG editor, etc. "experience". A terminal renders "strictly"* monospaced fonts, which makes it trivial to cache and parallelize.

* as it was brought up, one might use a non-monospace font, but that case can just use the slow path and let the “normal” people use a fast terminal


I understand the scepticism about such claims, but Casey's renderer is not a toy, and it handles a number of quite difficult test cases correctly. He solicited feedback from a sizeable community to try and break his implementation. The code is available here: https://github.com/cmuratori/refterm


From the refterm README:

refterm is designed to support several features, just to ensure that no shortcuts have been taken in the design of the renderer. As such, refterm supports:

* Multicolor fonts

* All of Unicode, including combining characters and right-to-left text like Arabic

* Glyphs that can take up several cells

* Line wrapping

* Reflowing line wrapping on terminal resize

* Large scrollback buffer

* VT codes for setting colors and cursor positions, as well as strikethrough, underline, blink, reverse video, etc.


The really hard part of writing a terminal emulator, at least from my experience working on Alacritty, is fast scrolling with fixed regions (think vim).

Plenty of other parts of terminal emulators are tricky to implement performantly; ligatures are one Alacritty hasn't got yet.


Thanks for the insight.

I have never written a terminal emulator, so could you maybe summarize why fast scrolling with fixed regions is so hard to implement?


Reading the thread itself, it’s a bit of both. Windows Terminal is complex, ClearType is complex and Unicode rendering is complex. That said… https://github.com/cmuratori/refterm does exist, does not support ClearType, but does claim to fully support Unicode. Unfortunately, Microsoft can’t use the code because (a) it’s GPLv2 and (b) it sounds like the Windows Terminal project is indeed a bit more complicated than can be hacked on over a weekend and would need extensive refactoring to support the approach. So it sounds a bit more like a brownfield problem than simply ignoring half the things it needs to do, though it probably does that too.


> Unfortunately, Microsoft can’t use the code

As good as Casey Muratori is, Microsoft is more than big enough to have the means of taking his core ideas and implementing them themselves. It may not take them a couple of weekends, but they should be able to spend a couple of experienced man-months on this.

The fact they don't can only mean they don't care. Maybe the people at Microsoft care, but clearly the organisation as a whole has other priorities.

Besides, this is not the first time I've seen Casey complain about performance in a Microsoft product. Last time it was about boot times for Visual Studio, which he uses to debug code. While reporting performance problems was possible, the form only had "less than 10s" as the shortest boot time you could tick. Clearly, they considered that if VS booted in 9 seconds or less, you didn't have a performance problem at all.


> Unfortunately, Microsoft can’t use the code

I commented on a separate issue re: refterm

--- start quote ---

Something tells me that the half-a-dozen to a dozen of Microsoft developers working on Windows terminal:

- could go ahead and do the same "doctoral research" that Casey Muratori did and retrace his steps

- could pool together their not insignificant salaries and hire Casey as a consultant

- ask their managers and let Microsoft spend some of those 15.5 billion dollars of net income on hiring someone like Casey who knows what they are doing

--- end quote ---


> Unfortunately, Microsoft can’t use the code because (a) it’s GPLv2

One thing to remember is that it is always possible and acceptable to contact the author of a GPL-licensed piece of code to enquire whether they would consider granting you a commercial license.

It may not be worthwhile but if you find exactly what you're looking for and that would take you months to develop yourself then it may very well be.


Not always. GPL-licensed code does not have to have "the author". There may be hundreds of copyright holders involved (IIRC, Netscape(?) spent years looking for people who had to agree when it planned to change its license, and rewriting parts written by people who didn't).


Why talk in such generalities? Look at the github repo. There are only three committers to Casey's repo. I'm sure Microsoft could manage to contact them. I'm also quite sure that Microsoft has the money to entice a commercial license if they so wish.


> Look at the github repo. There are only three committers to Casey's repo. I'm sure Microsoft could manage to contact them.

Microsoft's attitude towards the code seems a little odd. [0]

Unfortunately the code is intentionally GPLv2 licensed and we‘ll honor this wish entirely. As such no one at Microsoft will ever look at either of the links.

Given that WSL exists I can't imagine this is a universal policy towards reading GPLv2 code at Microsoft.

[0] https://github.com/microsoft/terminal/issues/10462#issuecomm...


Yeah, the attitude doesn’t really make any sense. How does the license preclude them from looking at the code? They can download it, compile it, and even run it _without_ accepting the license. They only need to care about the license if they decide to distribute it.


Blanket policy to prevent claims of "you looked at our code and stole $importantDetail": only let people look at code you can safely admit to using in your product.


Because the comment I replied to made the generic claim (emphasis added) “One thing to remember is that it is always possible and acceptable to contact the author of a GPL-licensed piece of code”.


Sure, 'the author' may be a number of people collectively, and in that case it's probably not worth bothering.


> (a) it’s GPLv2

Why is that a problem? A GPLv2 terminal would not be a business problem for Microsoft. People would still have to buy licenses for Windows. Maybe they would lose a little face, but arguably they have already done so.

At least it’s not GPLv3, which this industry absolutely and viscerally hates (despite having no problem with Apache 2.0 for some reason; Theo de Raadt is at least consistent).


If Microsoft embedded the GPLv2 terminal into Windows, Windows would have to release as GPLv2 (or compatible license). I assume they don't want that.

They can alternatively buy a commercial license, as another user said below.


You should read up on the "mere aggregation" clause of the GPLv2. It allows an OS to include a GPLv2 program without having to put the entire OS under the GPLv2. If the GPLv2 did function the way you seem to think it does, then almost every Linux distro would be in violation, too.


Thanks, I think this is a very important point that I totally missed.


> Unfortunately, Microsoft can’t use the code because (a) it’s GPLv2

That's not unfortunate. Having people who work on competing Free Software is a good thing. It would be even better if Microsoft adopted this code and complied with the terms of the GPL license. Then we wouldn't have to deal with problems like these, because they'd be nipped in the bud. And we would set a precedent for taking care of a lot of other problems, like the malware, telemetry, and abuse of users' freedoms.


It's the hardest thing about building perf-related PoCs. Every time I've built a prototype to prove out an optimization, I've spent the entire duration of the project watching the benefit shrink, stressing that it would dwindle to nothing by the end. So far, I've been lucky and careful enough that I haven't convinced a team to expend massive resources on a benefit that turned out to be fictional, but I've had it happen enough times at the scale of a person-day or two that it always worries me.


Counterexample: WireGuard. Turns out OpenVPN was massive and slow for no reason and it only took one (talented and motivated) man to make a much better version.


> > for an experienced programmer a terminal renderer is a fun weekend project and far away from being a multiyear long research undertaking.

> You shouldn't automatically assume something is actually bad just because someone shows a [vastly] better proof-of-concept 'alternative'.

Apparently you should. I can confirm that the first quote is an appropriate assessment of the difficulty of writing a terminal renderer. Citation: I did pretty much exactly the same thing, for pretty much exactly the same reasons, when (IIRC gnome-)terminal was incapable of handling 80*24*60 = 115200 esc-[-m sequences per second, and I am still using the resulting terminal emulator as a daily driver years later.


>> I have no idea if this is the case here, and I suspect it might not be, but pretty much every time I've seen a developer complain that something is slow and then 'prove' that it can be faster by making a proof-of-concept the only reason theirs is faster is because it doesn't implement the important-but-slow bits and it ignores most of the edge cases.

Even in those cases it usually turns out that the handling of edge cases was considered reason enough to sacrifice performance rather than finding a better solution to the edge case. Handling edge cases probably should not cost 10x average performance.


This seems referenced in the repo itself, see the “feature support” section [1].

That being said, is anyone aware of a significant missing feature that would impact performance?

[1]: https://github.com/cmuratori/refterm#feature-support


Screen reader support[0] may have a noticeable performance cost.

[0] https://github.com/microsoft/terminal/issues/10528#issuecomm...


Can you explain how screen reader support could possibly have a noticeable performance cost?

The screen reader code should be doing absolutely nothing if it's not enabled - and even if it is, I can't imagine how it could affect performance anyway. For plain text, such as a terminal, all it does is grab text and parse into words (and then the part where it reads the words, but that's separate from the terminal) - I don't see how this is any more difficult than just taking your terminal's array of cell structs, pulling out the characters into a dynamic array, and returning a pointer to that.
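A sketch of the extraction step being described (assuming a grid-based cell struct; the names are made up):

  /* Sketch: flatten the terminal's cell grid into plain text for a screen
     reader. Illustrative only; none of this runs unless a reader asks. */
  #include <stdlib.h>

  typedef struct { unsigned int codepoint; /* plus colors, flags, ... */ } Cell;

  /* Returns a newly allocated array of 'count' codepoints; caller frees. */
  unsigned int *grab_text(const Cell *cells, size_t count)
  {
      unsigned int *out = malloc(count * sizeof *out);
      if (!out) return NULL;
      for (size_t i = 0; i < count; ++i)
          out[i] = cells[i].codepoint;
      return out;
  }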


Not necessarily. Often, especially in big corporations, programmers will be incentivized to deliver things quickly rather than to provide the optimal solution. Not because they are bad at programming, but because they have quotas and deadlines to meet. Just remember the story of how, in the first Excel version, a dev hard-coded some of the cell dimension calculations because they were under pressure to close as many tasks as fast as possible.


The one example that comes to mind is file system search.

I am writing this application that displays the file system in the browser in a GUI much like Windows Explorer or OSX Finder. It performs file system search substantially faster than Windows Explorer. Windows Explorer is written in a lower level language with decades of usage and experience where my application is a one man hobby project written in JavaScript (TypeScript).

The reason why the hobby project is so substantially faster than a piece of core technology of the flagship product of Microsoft is that it does less.

First, you have to understand how recursive tree models work. You have a general idea of how to access nodes on the tree, but you have no idea what’s there until you are close enough to touch it. File system access performance is limited by both the hardware on which the file system resides and the logic of the particular file system type. Those constraints erode away some of the performance benefits of using a lower level language. Whatever operations you wish to perform must be individually applied to each node, because you have no idea what’s there until you are touching it.

Second, because the operations are individually applied to each node, it’s important to limit what those operations actually are. My application only searches for a string fragment, the absence of a string fragment, or a regular expression match. Wildcards are not supported, and other extended search syntax is not supported. If you have to parse a rule each time before applying it to a node’s string identifier, those are additional operations performed at each and every node in the designated segment of the tree.

For those familiar with the DOM in the browser it also has the same problems because it’s also a tree model. This is why querySelectors are so incredibly slow compared to walking the DOM with the boring old static DOM methods.
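To illustrate the shape of it (a minimal sketch using POSIX dirent for brevity, not the actual TypeScript): the entire search is one walk with one cheap test per node.

  /* Sketch: recursive file-system search applying one cheap predicate per node.
     POSIX-only and simplified; illustrative, not the application's real code. */
  #include <dirent.h>
  #include <stdio.h>
  #include <string.h>

  static void search(const char *dir, const char *fragment)
  {
      DIR *d = opendir(dir);
      if (!d) return;
      struct dirent *e;
      while ((e = readdir(d)) != NULL) {
          if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
              continue;
          char path[4096];
          snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
          if (strstr(e->d_name, fragment))  /* the only per-node operation */
              puts(path);
          if (e->d_type == DT_DIR)          /* d_type is a common extension */
              search(path, fragment);       /* you can't know what's below until you touch it */
      }
      closedir(d);
  }

  int main(int argc, char **argv)
  {
      if (argc == 3)
          search(argv[1], argv[2]);
      return 0;
  }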


> pretty much every time I've seen a developer complain that something is slow and then 'prove' that it can be faster by making a proof-of-concept the only reason theirs is faster is because it doesn't implement the important-but-slow bits and it ignores most of the edge cases

It's still a good place to start a discussion though. In such a case, apparently someone believes strongly that things can be made much faster, and now you can either learn from that person or explain to them what edge cases they are missing.


Maybe that's what usually happens, but it doesn't apply in this case.


This.

My time library is so much faster and smaller than yours. Timezones? Nah, didn't implement it.

My font rendering is so much simpler and faster than yours. Nah, only 8 bit encodings. Also no RTL. Ligatures? Come on.

The list goes on.


Don't assume. Casey Muratori produced a highly correct Unicode renderer with correct VT code support.

Watch the video: https://www.youtube.com/watch?v=99dKzubvpKE


Except that in this case it’s the other way around. This weekend project has better support for things like ligatures, combinations of Arabic characters, RTL, etc.


In case this isn't clear, RefTerm is:

1. Faster in every case tested

2. More fully-featured, including i18n

3. Easier to read and maintain (see for yourself)

4. Shorter

5. Using existing libs, aka interops well

...So none of these typical hand-wavy dismissals apply:

1. "Is it really faster for edge cases"

2. "He probably didn't implement certain features like Arabic and Chinese"

3. "Businesses just wants enough"

4. "Businesses just wants enough"

5. "It probably makes some closed-world assumption"

The performance of RefTerm didn't come from some big tradeoff; it came from using the right perspective and keeping things simple.

Sure, past a certain complexity threshold you'd have to work for extra perf, but from my observations, folks who immediately jump to the wrong implicit conclusion that "good perf must have required compromises" got conditioned to think this way because the software they work on is already (often unnecessarily) complex.


I don't think dismissal 1 applies anyway - even if RefTerm didn't implement, say, variable-width fonts - you could just build a terminal that uses RefTerm's fast algorithms for the common case, then falls back to the Windows Terminal's slower algorithms for the more general case.


Yup, this. Something I've been ranting about[0] for a while: there is no technical progression ladder. The Senior->Principal-> ... path seems to be a faux-management track, with all the managerial responsibilities and little of the authority. Software is shitty in big part because it's mostly written by juniors, as almost anyone who has any clue moves over (or gets pushed over) to managerial or faux-managerial roles.

I've been thinking about a more charitable interpretation of this recently. Another thing I rant about[1] is that the tools we use for programming are not expressive enough. It takes too much work to say simple things, and the coding feedback loop is ridiculously long. So how does this connect? In a way, junior programmers can be seen as a Mechanical Turk version of a Sufficiently Smart Compiler - it lets the more experienced people skip some of the tedium of the tooling and deal with higher-level concepts, which get translated into actual code by an army of juniors and interns. Except this doesn't work that well - the coding feedback loop is even slower.

--

[0] - https://news.ycombinator.com/item?id=27417462

[1] - https://news.ycombinator.com/item?id=28568053


This definitely resonates. I've often talked about how the key transition to 'lead engineer' is when you learn how to wield a team of programmers to solve a technical problem, instead of solving it yourself, and to higher levels it's when you learn how to wield an entire organization to some technical purpose.

These ARE much clunkier tools than a compiler.

Another way I've expressed it is how the tool you use to get started on implementing a change shifts as you grow more senior (if you'll forgive an IDE-centric kind of view - this is meant to reflect life in a Big Org - feel free to substitute with appropriate emacs commands)

- Junior Dev: File > Open

- Dev: File > New

- Senior Dev: File > New Project

- Lead Dev: File > New Diagram

- Principal Dev: File > New Powerpoint Presentation

- Staff Dev: File > New Recurring Meeting Request


Love this metaphor and progression, but I thought Principal was ahead of Staff in the Ladder?


There may be a sort of sweet spot in the progression where a programmer can advance to a point where they spend most of their time figuring out the code architecture (maybe with a skeleton) for others to implement the details of, with some code reviews here and there, rather than coding as much themselves (but they still code, especially the critical pieces). This lets them still be a key player as far as programming is concerned but there's also a space for more junior programmers to learn and grow without trashing the whole project in the process. The Godot engine seems to be turning into this sort of sweet spot model with its tech lead. However in a company this seems like an unstable situation, since I've seen such leads advance another step and now the only thing they "architect" is business strategy via all-day meetings, like any other executive, and the last time they coded anything was years ago. You might get them on a rare code review but they've become so far removed from the details they're not as helpful as they used to be, which in turn leads to not including them anymore. This distance hurts again because now that they have at least some influence (not as much as the 'equivalent' managerial track position, 'mysteriously') to address long-standing dev pains they are too far removed from the pain to spend their limited influence fighting it.

My own filter for how likely a company is to fall (or be) in this trap: does the CTO code at all?


>> spend most of their time figuring out the code architecture (maybe with a skeleton) for others to implement the details of

I think a good use of the experienced programmer is to write some critical components well. I'm also with you on system level design: These are the important interfaces, you write this piece and you write that one.

Another example, I have an Oculus Quest 2 VR headset. It was fantastic when I got it in March. Now they've got some glitches where the rendering STOPs for a brief instant and the view is not updated with motion. I'm wondering if this is because John Carmack is no longer full time there and someone compromised some design principle in order to implement something. Once the glitches are in, they're going to be very hard to get back out as time goes on.


Is this not what old-school corporate dev environments did? Grunts filling in interfaces? I’m sure I’ve read this sort of thing before


There is that joke about the University Professors talking about the best high level language, throwing out various languages until one just replied with "Grad Student".

It does make a lot of sense, you describe the problem in a very high level and abstract way and at the end of the process you get an executable out of it.

But yeah, as you go up the ladder, even the technical one, you end up needing people skills more and more than any technical or programming skill. Usually by senior level you're the best that you'll ever be at technical topics (hand wavy description here, don't nitpick), and the only way to progress up is to improve communication. No amount of extra technical knowledge will help you if you cannot communicate well.


Juniors are beginners, learners, apprentices, and are bound to make mistakes and design things poorly. The issue is there are so few high-quality engineers relative to the demand, and it's hard to prove who is high-quality before they're onboard for a few months. So, if someone is high-quality, you really don't want to lose them, so you promote them, pay them more, try your best to keep them on-board. There's a benefit to elevating them out of the weeds, because often they can glance at the weeds and tell someone else how to clean it up easily, without spending their effort to actually do it.

Furthermore, they can glance at 10 patches of weeds, 10 junior engineers, and find the couple that need more guidance than the others. They can leverage their knowledge through others, assessing their strengths/weaknesses as coders in ways juniors never could.


It seems like companies are creating independent contributor tracks to remedy this issue. You can have people who are legendary programmers float around and contribute where they wish.


This is well-said and resonates with me quite a bit. How do we build expertise in an engineering organization where the best engineers are incentivized not to engineer?


I really appreciate this perspective. Better tooling to enable individuals to penetrate more layers of abstraction sounds great. To some extent, e.g. with cloud providers, this is already happening. We have more powerful building blocks than ever. However, I feel the idea of experienced vs. junior programmer is not helping this discussion much. Experience can also hurt to some extent, because you need to unlearn things.


> To some extent, e.g. with cloud providers, this is already happening. We have more powerful building blocks than ever.

This is not ideal though. Cloud services have a pretty short half-life, and trap you in business relationships that you shouldn't need just to build software.

(I know, I can't stop complaining.)

> I feel the idea of experienced vs. junior programmer is not helping this discussion much. Experience can also hurt to some extent, because you need to unlearn things.

That is true, but I still think the idea is sound - there's much more universal, transferable programming-related experience juniors gain than things they'll need to unlearn later. The problem I see here is that, just as those developers gain that experience, they get pushed out to management / faux-management. In principle, this should lead to at least some improvement in quality - as increasingly better people are tutoring the next generation and directing it - but I feel we've reached a fixed point in terms of software skill, with most software being written at a pretty low level.


Quite right. Not many companies follow Fred Brooks' "surgical team" model and I would like to see more try it.


This reflects a lot of what goes on in Academia as well.

Also, if you think you can counter a "that's not possible" with a working proof of said thing, think again: most people don't like to be shown they're wrong, particularly tenured professors. That's one of the fastest ways to sabotage your own developing career.

"Yes men" climb the ladder much easier, but then work doesn't do itself so that's the catch. Cue exploitation of interns and other newcomers and you almost have the whole picture on how things get actually done. Hence why "science advances one funeral at a time". Once someone reaches its own Peter level [1], it stops being a productive force and becomes a plug that jams everything behind it.

1: https://en.wikipedia.org/wiki/Peter_principle


Some professors are humble though.

I had a Physics 101 professor who gave a test that had a problem involving blowing a fan into a sailboat sail. The answer to the question was supposed to be that the sailboat doesn't move anywhere because of Newton's 3rd law, but one guy in the class spent close to the entire exam on that one problem showing that it would slowly move forward using a conservation-of-momentum-based approach instead of a Newton's 3rd law approach. The TAs marked it wrong and he got a low score on the test because he spent so much time on that one problem. He tried to argue about it to the professor in lecture but the professor shut him down saying to come discuss it during office hours. So he made a demo using a pinewood derby car with a hand fan glued to it and brought it to office hours and proved that it would move forward. The professor was super humble about it and brought the demo to lecture the next day and publicly gave kudos to the student for challenging the status quo.


Interesting story :-)

I wonder if they sorted out the differences between the mathematical approaches, so the professor could also agree on the theory side.


>"Yes men" climb the ladder much easier, but then work doesn't do itself so that's the catch.

The harder the work, the lower the pay.


I would really be surprised if any tenured professor would shut down a solution to a well known hard problem just based on his ego. More likely the solution is not as well presented, researched or thought through as it seems (I'm assuming we're not talking about astrophysics here).


> I would really be surprised

I wouldn't.


It really depends on the person, but yes you have to get to know them to know how they will react. I find that this is more of a problem in the US, where tenured professors often develop these vast impenetrable egos, while in other countries, such as in the UK or Germany, they are more pragmatic. That said it is still entirely individual, and I knew who to avoid when I had certain ideas in my PhD.

Also, our German group leader asked me to remove a feature from my software package because he was worried that the Americans would get upset at an early PhD student showing them up.


Actually, I'll add something extra: I know an Indian professor who has become quite senior in the US and is one of the more "famous" members of our collaboration, and he enjoys a "guru figure" kind of perception when he returns to India. He says that people literally treat him like he is a lord or a guru. Very strange.


This is both annoying and understandable. For the dev team, to dive into this performance optimization would mean punting something else from their roadmap. I suspect they weighed their perceived chances of success against the projected cost and reached a certain conclusion which in retrospect seems wrong. The independent developer did not have that "hurdle" as he simply did this at the expense of whatever other fun programming challenges he may have done those weekends. It's awesome that he turned out to be right. But I suspect if he turned out wrong we'd never hear about it. There was likely no tradeoff analysis on his part in undertaking this project, while the dev team did have to do one.

On the flip side, I have seen developers get really hung up on what is academically difficult rather than what is practically doable. I had a young developer who worked for me get really upset and hung up on the fact that something he was asked to do was NP-equivalent, meaning there's no known non-exponential-time algorithm to implement what I was asking for. It took me weeks to realize this was messing with him, and then about 5 minutes to dig into the data and show that the input length never practically exceeds something like 7, so an exponential algorithm was fine. So I can see how a dev could get stuck thinking something is really hard when it really isn't. But I don't think it makes them a bad dev; it's just hard to transcend your own perspective.
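For scale (with a made-up objective standing in for the real problem): exhaustive search over 7 items is at most 7! = 5040 candidates, which is nothing.

  /* Sketch: when n never exceeds ~7, brute force over all permutations is fine.
     The cost function is a placeholder for whatever the real objective was. */
  #include <stdio.h>

  #define N 7

  static int best;

  static int cost(const int *perm, int n)
  {
      int c = 0;
      for (int i = 0; i < n; ++i)
          c += (i + 1) * perm[i];   /* placeholder objective */
      return c;
  }

  static void permute(int *a, int k, int n)
  {
      if (k == n) {
          int c = cost(a, n);
          if (c < best) best = c;
          return;
      }
      for (int i = k; i < n; ++i) {
          int t = a[k]; a[k] = a[i]; a[i] = t;
          permute(a, k + 1, n);
          t = a[k]; a[k] = a[i]; a[i] = t;
      }
  }

  int main(void)
  {
      int a[N] = { 3, 1, 4, 1, 5, 9, 2 };
      best = 1 << 30;
      permute(a, 0, N);
      printf("best = %d after 5040 candidates\n", best);
      return 0;
  }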


I remember clearly when I first gained a real appreciation for the importance of measuring and considering your actual data.

I was trying to optimize a slow project doing some polyhedron dissection on large (~1M polyhedrons) meshes in real-time. As a first attempt I had my sights set on a function which tried to find a certain point on a surface by making an educated guess and then iteratively bumping its estimate in the right direction. I figured "I can solve this analytically instead!".

I did, and it made things approximately twice as slow. After scratching my head and outputting some data, I realized my constant-time solution required a number of expensive operations, including sqrt, while the iterative approach ran ~1 iteration on average and hit its maximum of 3 iterations for just a handful of those millions of triangles.


> I realized my constant time solution required a number of expensive operations including sqrt and the iterative approach ran on average ~1 iteration

Yeah! It's deeply unsatisfying to forego algorithmic optimization but it often isn't necessary.

EG, when I do Advent of Code, I often write a naive algorithm that works correctly on the sample input, and then runs a really long time on the actual problem input. Before I go redo the algorithm, I often run it with PyPy instead of the standard Python, and 80% of the time it terminates quickly enough.

Similarly, whenever I happen to do AoC in C++, my naive algorithms happen to magically be fast enough out of the box, so I don't even think about better algorithms. It's unfortunately counter to the intuition you get from CS programs that obsess over Big-O notation rather than implementation speed.

Big-O is really important but it seems like the input size where it starts to dominate implementation is much larger than our intuition suggests.

(of course it's also easy to write really bad algos that suck even on small input)


Unfortunately this was a case where the naive algorithm wasn't fast enough so we needed to optimise. But this particular part of it wasn't the solution.

We instead ended up changing our data format in a way which reduced cache misses.


Reminds me of the time I spent ten days building a memory allocator based on a red-black tree to optimize lookup times for an interpreter that needed to be fast. When I compared its performance against a linear search, it failed miserably. Turns out the fixed overhead in implementing a red-black tree vastly outweighs a naive approach when you're only dealing with 20 terms or so.
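The boring version that won looks roughly like this (a sketch; the interpreter's real structures were of course different):

  /* Sketch: for ~20 live terms, a linear scan over a flat array tends to beat a
     red-black tree, whose pointer chasing and rebalancing overhead dominates at
     this size. Illustrative only. */
  #include <string.h>

  typedef struct { const char *name; void *block; } Binding;

  typedef struct {
      Binding slots[32];   /* plenty of room for ~20 terms */
      int     count;
  } Allocator;

  void *lookup(const Allocator *a, const char *name)
  {
      for (int i = 0; i < a->count; ++i)   /* one contiguous, cache-friendly pass */
          if (strcmp(a->slots[i].name, name) == 0)
              return a->slots[i].block;
      return NULL;
  }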


I'm not sure "It doesn't take a PhD, just years of experience" is quite the rejoinder the author thinks it is.

Aren't they both just ways of saying "It takes someone with a rare subset of highly specific knowledge to solve this problem"?

The MS person was saying "Sure, there's probably a way to make this faster but I don't have anyone on my team who knows how to do that. If I had the budget to go out and hire someone with a PhD maybe I could do it."

That Casey Muratori could then show up and do it in his spare time doesn't really refute that argument, does it? This is someone who has spent years actually doing hardcore R&D work, and has literally published techniques that advance the state of human knowledge in the field of GUI rendering[0]. His resume is kind of a living example of 'PhD, or equivalent professional experience'.

So sure, it might be nice if MS's career structure led to them having more Muratoris on staff, but I don't think the expectation that every product team should have one or two developers of that caliber on it is a reasonable one.

It IS a good argument for why MS should accept open source contributions, though.

[0] https://caseymuratori.com/blog_0001


It's not really what the MS devs said though: "I believe what you’re doing is describing something that might be considered an entire doctoral research project in performant terminal emulation"

This seems to mean that nobody has ever done it; it is a research project (a doctoral thesis must be about something that has never been researched before, at least where I live).

This particular problem cannot be a PhD subject, because it has already been researched, solved and done multiple times, by different people and on different projects.

Someone can use the knowledge already widely available on the internet to grasp what the issue is, and implement a solution in their own project.

Years of experience aren't a PhD. Years of experience can help you understand a thesis, but only if the research doesn't exist yet can it be considered a doctoral research project.


So the comment that the 'PhD' reference was responding to was Muratori laying out how he figured a simple terminal renderer could be implemented, using two texture lookups in a single drawcall [0].

In the course of that, Muratori allowed, almost in passing, that of course you would need, in order to do that:

> a glyph atlas encoding the cell-glyph coverage in whatever way makes it easiest to compute your ClearType blending values

Now, it does not strike me as beyond the realm of possibility that that step might be suitable for academic research. It calls for an efficient encoding of cleartype-ready glyph data with comprehensive coverage of unicode glyph space (not codepoint space). That's not trivial, and - as the DHowett reply suggests - it would take a literature review to determine if someone has already accomplished this (and he also suggested that so far as he was aware this was not how any other terminal was implemented, implying this is not simply some well known solved problem).

I mean, sure, he could be wrong about that - it might be exactly how another terminal implements it; there might already be published research on the topic; or it might just be much simpler than it appears. But it's not crazy to, on glancing at that problem, consider that it seems like it might actually require genuine original high level research.

[0]: https://github.com/microsoft/terminal/issues/10362#issuecomm...


I disagree. All this means is just putting the glyph cache on the GPU. DirectWrite (or any other font rendering API you care to use) already implements a glyph cache, and it already deals with filling in that cache gradually whenever you use glyphs not already present in the cache. And it already knows about Cleartype, or whatever form of antialiasing you prefer. There’s no research here.
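
To illustrate what I mean by "just a cache" (a rough sketch only, not how any particular terminal actually does it; the rasterize callback and the slot count are invented for illustration):

    from collections import OrderedDict

    ATLAS_SLOTS = 4096  # e.g. a 64x64 grid of glyph cells in one GPU texture

    class GlyphAtlasCache:
        def __init__(self):
            self.slots = OrderedDict()  # glyph_key -> atlas slot index, kept in LRU order

        def lookup(self, glyph_key, rasterize):
            if glyph_key in self.slots:
                self.slots.move_to_end(glyph_key)          # mark as recently used
                return self.slots[glyph_key]
            if len(self.slots) >= ATLAS_SLOTS:
                _, slot = self.slots.popitem(last=False)   # evict the least recently used cell
            else:
                slot = len(self.slots)                     # atlas not full yet: take a fresh cell
            rasterize(glyph_key, slot)   # e.g. ask the platform text API to draw into that cell
            self.slots[glyph_key] = slot
            return slot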


Oh, well sure if it’s just a caching problem then it can’t be difficult.

Only hard part left would be naming it.


I think the other piece the comment and the rejoinder miss is: a PhD generally means you've done four years of work post-bachelor's, and some of that work was novel.

A PhD alone does not make someone an expert in most modern senses in software. The folks the author describes as "experienced" often have more than a decade of working software knowledge from after their education.

Casey Muratori has 30 years of programming experience. That's not a PhD, that's a PhD and then an additional 20+ years of experience.


And to be clear, Microsoft probably has plenty of people capable of writing this kind of code. They just aren’t working on console.

In fact, consider the hypothetical of: what if Microsoft had hired Casey Muratori.

Do you imagine they’d have put him on the console team?

I mean the Windows console, not the Xbox console or the Minecraft console.


Microsoft has _Michael Abrash_.

He wrote The Book on rendering performance. Michael's super-optimized x86 assembly routines for lighting and texturing are the reason we can run Quake on toasters.

Anyway, I don't think they should hire Casey. Especially after that demonstration of not understanding the big picture and not being a team player...


Maybe they shouldn't, but...

The 'big picture' he hasn't understood isn't just about text rendering in this instance, though. It's also a picture internal to Microsoft that motivates their commitment to certain libraries. I think the disconnect is to be expected, given that he's not currently at Microsoft. It may also be that some of those commitments are not as well motivated as MS thinks (i.e., maybe the additional maintenance burden of some specialized text rendering libraries for the simplified case of terminals is not so terrible and justified by the performance gains). I don't think this shows that at Microsoft he'd be unable to have a good sense of 'the big picture'.

It's also not clear that any of this shows he isn't a team player. The Windows Terminal developers are colleagues of his in the distant sense of 'fellow developers', but they're not 'his team'. He doesn't have pre-existing relationships with them. And his frustration was in being told 'this cannot be done' in a way that was insufficiently clear and convincing to him and came across as condescending. I think it's reasonable to be frustrated under those circumstances, feeling ignored and maybe also like someone is bullshitting you. I could see the interaction being different if Casey had a different relationship to the other developers and they were more invested in trying to loop him in on their whole suite of platform commitments that ground (or trap) them in the current design, especially if he felt like advocacy for alternative commitments (e.g., to maintaining a separate, simplified stack for rendering text on the terminal) might be reasonably considered.

I'm also sympathetic to some of the MS devs here, because some of Casey's tone could have been taken as suggesting that they weren't really making an effort with respect to performance and that could be insulting. I don't think it was necessarily wrong for them to bring that up. But I don't think Casey was abusive, either, and it doesn't seem like anything that went down in that thread is beyond repair to the point that the people involved in the discussion couldn't work together.

If Casey were interested in working at Microsoft I think it'd be silly to rule him out as a candidate based on what we can see in that thread.


* had

https://en.wikipedia.org/wiki/Michael_Abrash

Looks like he's been on Oculus since 2014

The fact that both Carmack and Abrash are at Facebook is endlessly disappointing :(


Do you have some examples of what big-picture stuff Casey doesn't understand? Not talking about being a team-player.


> Microsoft has _Michael Abrash_.

Michael Abrash left Microsoft 19 years ago.


Would just like to say that:

1. WT is probably my favorite Microsoft product

2. Since this whole affair, the developers have apologized and corrected themselves which is usually unacknowledged in these discussions.

3. The developers have made a number of performance improvements since (most of which are in the latest preview release).

4. This episode is constantly retold and used to bully the developers even after these apologies, and this has led one of the developers to quit (paradoxically one of the devs who, after these posts, made the most effort to figure out where the bottlenecks are).


If that's the case, they should really link those developments/retractions/apologies in the comments of that issue. The way the comments end in that issue is a really bad look, and there's no hint to the reader that there were any further developments, either in the code or in the dev team.


Maybe, yeah. I don't have everything ready at hand but maybe take a look at:

https://github.com/microsoft/terminal/releases/tag/v1.11.242... (Latest release, check the PRs linked under performance)

https://github.com/microsoft/terminal/issues/10461 (Issue discussing rendering from a glyph atlas)

https://github.com/microsoft/terminal/issues/10462 (This is where one of the contributors quit)

https://github.com/microsoft/terminal/issues/10623 (Bullying example)


It saddens me to see, time and again, the role of leadership painted in such a dire and incorrect way.

Leadership is a lot more than about "shuffling paper" in an "ivory tower".

As a lead (in any decent company), you get to participate in critical design reviews; you get to read and review code; you get to shape the trajectory of others; you get to participate in discussions with the product teams. If you work smart, you get free time that you can use to keep coding. All of this can be extremely rewarding.

You are not serving our domain by painting this kind of picture of leadership. The truth is, you can excel as a software engineer, but you can also excel as a lead. And none is better than the other.

I dedicate this message to the "wet behind the ears" people you are referring to who would never consider the role if they were to stick to this poor vision you're delivering.


> And none is better than the other.

I agree with you, but few companies know how to appreciate those who decided to progress toward technical excellence. In most, not taking a "leadership" (aka management) position means your career reaches a dead end.


> It saddens me to see, time and again, the role of leadership painted in such a dire and incorrect way.

I think you misread the post.

I don't see where this is criticizing the role of leadership. Good leaders exist (though they are rare). I have been led by some of those inspiring people. But I've never met a leader (however good they were) who continued to be a great software engineer.

The point is that they all were stuck in the "Sort of knows what they are doing" zone. And that's ok for being a great leader. But then, nobody in the company is kept at a position long enough to become what the author describes as "Experienced".


> It saddens me to see, time and again, the role of leadership painted in such a dire and incorrect way.

It saddens me to time and again be stuck with "leaders" who don't know what they are talking about and think they need to weigh in on everything, even though they add negative value when they do so, because they want to justify their mostly pointless job.

In a 14-year career, I have had "leadership" that was not completely horrible for about 6 months, but even that I would not call good; at best it was mediocre. If only most "leaders" were wet behind the ears instead of bordering on grossly incompetent.

If people's experience of leadership is that it is almost always dead-weight, then the problem is that it is almost always dead-weight, not that people call it out for being almost always dead-weight.

And by the way, I have never met one person who thought they were a good leader that was even slightly so. People who think they are good leaders are almost invariably pompous, arrogant and ignorant.


So have you met any good leaders? Or are you saying that good leaders are nearly impossible to find (and therefore perhaps the job is difficult?)


> So have you met any good leaders?

I have met people who lead well, but I have not met anyone whose job it was to be a leader who led well.

> Or are you saying that good leaders are nearly impossible to find (and therefore perhaps the job is difficult?)

It's almost impossible to find someone whose job is to be a leader who is a good leader. The job being difficult does not explain why the worst possible people end up being leaders, and it also does not mean that we should somehow not call out the phenomenon that leadership is mostly feckless and adds negative value.

We should call this out more often, and we should tell people who want to be leaders that they better not suck because there are enough people who suck at it, and we should let them know that going by actual outcomes, they will likely suck, and are even more likely to suck if they think they are good at it.


What is good leadership, in your opinion? What makes someone's job a "leader" job, do you mean management only, or technical leadership roles as well? What is a leaders job?

When you say that "leadership is feckless", do you mean the leaders you've encountered, or that having a role of official leadership or hierarchy is irresponsible or negative?


If leadership does not stand in the way of delivering the product, lets the devs make their own decisions, and can get the organisation above them to go along with this, they’re good leaders in my opinion.


Of course we have all met people who were pompous, arrogant and ignorant.

But there are some good and outstanding leaders out there. If you never had the chance to meet one (or did not recognize them when they were in front of you), I am sorry for you.


> But there are some good and outstanding leaders out there.

There are, though they are almost never the people whose job it is to be a leader.

> If you never had the chance to meet one (or did not recognize them when they were in front of you), I am sorry for you.

The worst of the worst are the "outstanding" leaders who just "did not get recognized" as such. Not even the worst part of the worst hell would be a fitting place for that group.


All that being said, I agree with you: leadership is hard. In 20 years in industry, how many (really) good leaders have I met? Three maybe.

My original point is that the _role_ is a lot more interesting than what the article seems to suggest.


If you read the article, the author is discussing a conversation on Github and developer experience. The article isn't an indictment about academia (it doesn't even really come up at all).

Re the author's competence argument, given how complex software development is as a field, a better approach should be to always assume that there's a person out there who can solve a problem quicker and more competently than your guesstimate. Of course, you should also assume that they're not going to, and that you - with all of your issues, lack of experience, and general :/ 'iness - are the one who's either going to have to fix it or find someone who can.


The worst part is that the kind of arguments he was met with is exactly the kind of arguments people use to get headcount and climb the ladder within corporations.

Spending a little time writing a well designed and fully working solution => Good job, now take the next ticket!

Spending a lot of time explaining why something can't be done with this budget => Promoted to manager!

Spending a year explaining why the project needs a much bigger budget to deliver features => Promoted to director!

This is the main problem.


This is neither the problem nor what the article is talking about. Your argument is so overgeneralized that I don't even know where to start taking it apart.


Cultural issues are almost always tied to performance evaluations at a company, so it is highly relevant.

If this wasn't the problem then their manager would have reprimanded them for posting such nonsense. But either these objections were made by the manager or the manager agrees with them. Why would a manager who is fine with such bullshit still have his job? Because it goes all the way up the chain, otherwise someone would have cut it off a long time ago.


I understand that different people have different skill sets, that's totally fine. I don't think every single senior developer at Microsoft should have the skillset to fix performance issues like this. That said, however, I do think developers should be able to recognize when a task requires a skill that they do not have. In those cases it's best to reach out to someone within the organization who has that skill. Or, you know, you can be a senior developer at Microsoft and condescendingly claim that it's impossible, or "takes a PhD to develop that". I guess this is what you get when you fill organizations with people who don't care about performance, computer science or algorithms.

I'm going to use this case as another example to point to when people argue that algorithm skills are useless and shouldn't be used when interviewing.


I don't think this is about algorithms.


Yes it is. When you have 2 programs that do the same thing, except one program is orders of magnitude faster than the other, it's almost* always because the faster program is written with better algorithmic time complexity.

(*In some cases the big-O time complexity may be the same for 2 programs, but one program can still be much faster in practice due to micro-optimizations, heuristics, etc. Even in cases like this, people who are good at algorithms will be better at writing these heuristics and micro-optimizations than people who don't care about algorithms.)


In practice things are much more nuanced than what you learn in school and textbooks.

Sometimes you have a convenient library that sort of does what you want, but it does a lot more. Do you just use it, or do you re-implement only the subset that you need which can be optimized to run much faster?

That's not an algorithm question but more of a software engineering tradeoff between impact (how badly your users/business need it faster), resources and priority (how much time you can spend on optimization), and whether you want to maintain that code (as opposed to calling the library making it somebody else's problem). Sometimes the correct thing to do is really to call the slower library instead of writing your own highly optimized routines.

In this case of terminal emulation, apparently the folks at Microsoft weren't aware of the faster solution, which you could say is an algorithm issue (but that's kind of stretching things a bit -- you surely wouldn't see terminal emulation in an algorithms textbook, and the fact that one has memorized the textbook algorithms for an interview doesn't automatically mean they would figure out a better way of emulating a terminal. Presumably Microsoft does some whiteboarding on algorithms as well, but that didn't prevent this fiasco from happening). Anyway, the concerns I mentioned above are probably still relevant here (the directdraw run thing was a convenient and slow library that apparently did much more than they needed).

Algorithm "skills" are probably overrated in the sense that people can memorize textbook algorithms and their analyses all they want, but real world problems are often more complicated than that, and those "skills" don't necessarily translate/apply. For one, there's no general way to prove lower bounds, so a less imaginative programmer might just assume their inefficient implementation is all that is possible, until somebody else points out a better method. People are getting technical interviews wrong if (as interviewers) they ask standardized algorithm questions -- the candidates expect them and prepare for them, memorizing them if need be. But as the interviewer, ideally they'd want to be able to find the candidate who can discover a good solution for a novel problem they never saw or thought about before.

I'd further claim that while some of these skills can be learned and improved through training, there's a different "ceiling" for everyone since a lot of the intuition and imaginative aspects of problem solving can't really be taught or acquired. I've done a couple years of competitive programming in my younger years, and I can clearly observe that, when pressed with hard problems, there's a clear difference between how well people respond. The model in the original article assumes that these kind of skills come with experience, but in my experience that's mostly not true if you're dealing with "hard-ish" problems like how to vastly optimize text layout in terminals.


> Algorithm "skills" are probably overrated in the sense that people can memorize textbook algorithms and their analyses all they want ...

Memorizing algorithms has very little to do with algorithm skills. You can memorize any algorithm textbook you like, then go take part in an algorithm competition and see how far that will get you (spoiler alert: not far). If you are good at algorithms, you have the ability to construct efficient solutions to problems you have never seen before. And by "construct" I don't mean pattern-matching to previously-memorized algorithms, I mean writing something that didn't exist before you wrote it.

> People are getting technical interviews wrong if (as interviewers) they ask standardized algorithm questions ... But as the interviewer, ideally they'd want to be able to find the candidate who can discover a good solution for a novel problem they never saw or thought about before.

I agree with you 100% here. Good algorithm questions are questions that can't be solved by memorized answers.

> I'd further claim that while some of these skills can be learned and improved through training, there's a different "ceiling" for everyone since a lot of the intuition and imaginative aspects of problem solving can't really be taught or acquired [...] The model in the original article assumes that these kind of skills come with experience, but in my experience that's mostly not true if you're dealing with "hard-ish" problems like how to vastly optimize text layout in terminals.

Again I agree with you 100%. This also why I went in a different direction with my comment compared to the expectations set in the article. As you noted, the writer of the article expected everyone to learn these skills. In contrast, I said that it's ok for different people to have different skillsets.

> In practice things are much more nuanced than what you learn in school and textbooks. Sometimes [...]

This argument is presented far more often than it actually holds water. Yes, sometimes things are nuanced and complicated and whatnot, but we don't have to look at hypotheticals, we can just look at this actual case that we have right here. In this case things weren't nuanced and complicated and whatnot justifying a 40x performance drop when rendering colored text. In this case things really were simple: turns out this was just a horribly inefficient solution (like many products written by Microsoft are). Furthermore, turns out Microsoft devs truly were incapable of recognizing that a few simple optimizations would deliver orders of magnitude better performance without breaking anything, and without introducing unnecessary complexity. We know this, because Microsoft devs called the prospect of optimizing this performance a "doctoral research project", and then some guy did it in his free time over a few weekends.


> Memorizing algorithms has very little to do with algorithm skills. You can memorize any algorithm textbook you like, then go take part in an algorithm competition and see how far that will get you (spoiler alert: not far). If you are good at algorithms, you have the ability to construct efficient solutions to problems you have never seen before. And by "construct" I don't mean pattern-matching to previously-memorized algorithms, I mean writing something that didn't exist before you wrote it.

Yes, that's what I was trying to say. When people talk about "algorithm skills" it's not clear whether they mean only learning the stuff from the textbook, or whether they mean the ability to improvise on top of that. Sometimes I suspect people don't know the difference themselves either. For example, in the context of technical interviews, if the interviewer chooses a typical question like implementing binary search, reversing a linked list, etc., they are most likely going to filter for those who memorized (to some extent) the solutions instead of those with the ability to create novel solutions.

So about "algorithm skills are useless and shouldn't be used when interviewing", I guess my point is, it's useful, but it really depends on the interviewer not to screw it up by picking a standard question that can easily prepared for beforehand.


> In this case things weren't nuanced and complicated and whatnot justifying a 40x performance drop when rendering colored text. In this case things really were simple: turns out this was just a horribly inefficient solution (like Many products written by Microsoft are).

My original point was that the reason for the performance drop likely wasn't related to algorithms. In practice, algorithms rarely are the issue. To clarify, in practice, what you use 90% of the time are plain arrays and indices / pointers. Occasionally you'll use a hashmap to lookup (distributed) or cache something.

Performance drops in the real world are most often explained by lack of understanding what certain library calls do, lack of understanding what are their performance characteristics, and/or lack of understanding what needs to be done to satisfy the requirements, or what the requirements should even be.

In most real-world programming domains (probably including the issue discussed here), most problems look somewhat like the following example: Someone reads byte-by-byte from some input stream by calling the read() system call, instead of using a buffered variant (like stdio fgetc()). They might not even notice that they are doing this, because the calls are hidden below a library abstraction. Now for each byte read, there are ballpark 1000 cycles wasted due to system call overhead and other related performance-decreasing effects. So there is an overhead on the order of 1000x maybe. This leads to terrible performance, but it's not related to "algorithms" (asymptotic complexity is not the issue here, we're just pumping data).
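
A toy Python rendering of that exact pitfall (the buffer size is arbitrary; both copies do the same work, only the syscall count differs):

    import os

    def slow_copy(src_fd, dst_fd):
        # one read() syscall per byte: orders of magnitude more kernel round-trips than needed
        while True:
            b = os.read(src_fd, 1)
            if not b:
                break
            os.write(dst_fd, b)

    def fast_copy(src_fd, dst_fd, bufsize=64 * 1024):
        # same asymptotic complexity, vastly fewer syscalls
        while True:
            chunk = os.read(src_fd, bufsize)
            if not chunk:
                break
            os.write(dst_fd, chunk)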


> My original point was that the reason for the performance drop likely wasn't related to algorithms. In practice, algorithms rarely are the issue. To clarify, in practice, what you use 90% of the time are plain arrays and indices / pointers. Occasionally you'll use a hashmap to lookup (distributed) or cache something.

This doesn't make any sense. It sounds like you're implying that algorithms are not... using plain arrays and pointers? But that's exactly how algorithms are usually constructed. Even the HashMaps you mentioned are typically implemented by using arrays and pointers. If you use arrays and pointers to write something that, let's say, has O(n^3) time complexity when the same operation could be written with O(n) time complexity, then you have an algorithm problem.

> Performance drops in the real world are most often explained by lack of understanding what certain library calls do, lack of understanding what are their performance characteristics, and/or lack of understanding what needs to be done to satisfy the requirements, or what the requirements should even be.

This is such a weak excuse. If you don't understand why a library call is crushing your performance, then stop using that library. People typically use libraries to do the tiniest of tasks, tasks that could often be solved with custom code that can be written in an hour. As the result of this attitude we have terminals that have performance issues... for rendering colored text. That's insane! Sometimes it feels like this whole field is just people gluing together pieces made by other people, with no understanding whatsoever about how everything works. There's no excusing for this behavior.


No need to get condescending.

> It sounds like you're implying that algorithms are not... using plain arrays and pointers?

Ok, but no, I did not mean to imply that. What I mean is that most programs are either straightforward linear scans over whole arrays, or working on data addressed by some index supplied by a separate entity. Using a hashmap to look up items faster is almost at the ceiling of complexity of most real-world programs.

And really, that's sufficient to do most things at max speed. "Algorithms" (meaning stuff like interval trees, Fenwick trees, LCA computation, square root decomposition...) is irrelevant academic exercise for the vast majority of applications (including kernels and games).

At the stage of using the most optimal algorithm (which is probably so simple that it doesn't deserve that name) we're still frequently dealing with factors of 10x to 1000x of inefficiencies - optimal code from an algorithms / asymptotic standpoint but just badly implemented.

> Sometimes it feels like this whole field is just people gluing together pieces made by other people, with no understanding whatsoever about how everything works.

That's exactly how it is. To people who want to do better I recommend to check out programmers like Casey Muratori (original poster of the github issue), and to check out Handmade Network community for example.

> There's no excusing for this behavior.

They didn't know better. Not particularly flattering though.


> No need to get condescending.

I apologize.

> Ok, but no, I did not mean to imply that. What I mean is that most programs are either straightforward linear scans over whole arrays, or working on data that was specifically requested by some index requested by a separate entity. Using a hashmap to lookup items faster is almost at the ceiling of complexity of most real-world programs.

Ok, now I see what you mean. I mean, I disagree with you, but I understand your point now.

> And really, that's sufficient to do most things at max speed. "Algorithms" (meaning stuff like interval trees, Fenwick trees, LCA computation, square root decomposition...) is irrelevant academic exercise for the vast majority of applications (including kernels and games).

Yes, all of the examples you provided are irrelevant for your average project. I don't mean that your average project would run into a need to optimize range queries with Fenwick trees. I mean your average project will run into situations where someone needs to write a simple O(n) list traversal, and they somehow manage to write it as O(n^2) because they have no understanding or interest towards algorithms. If the organization has staffed at least some people who care and have algorithm skills, they will spot these "low hanging fruit inefficiencies" and fix them.
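
For example, the kind of thing I mean (Python; a deliberately simple illustration):

    def common_items_quadratic(a, b):
        # "x in b" is a linear scan over a list, so this is O(len(a) * len(b))
        return [x for x in a if x in b]

    def common_items_linear(a, b):
        b_set = set(b)              # one O(len(b)) pass up front
        return [x for x in a if x in b_set]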


I feel this is missing the elephant in the room, which is that experts are not interchangeable. They have typically specialized in whatever they have spent lots of time doing, and so developers in different sectors develop a lot of different skills that are typically not very useful to developers in other sectors.

A result from that is that what may be trivial for people in one sector is hard for people in a different sector, and vice versa. Often people will not know enough about sectors different from the one they operate in to know what is hard or not in other sectors.


Counter argument: If the experienced programmers stay where they are, software development will forever remain craftsmanship. A single developer can only tutor so many juniors.

In fact, I think that teaching and encouraging to learn is the most important thing in SE. And that has to begin with leadership figures that don't believe their own knowledge gets obsolete after one year just because the latest framework got released.

SE should not be an art form. It should be founded in solid science and abstraction should allow us to keep our knowledge applicable. You're not a bad engineer because you never touched the latest hypetech or programming language. You are a bad engineer if you only consider yourself a "frontend developer with react on chrome".

All IMO, of course.


> SE should not be an art form. It should be founded in solid science and abstraction should allow us to keep our knowledge applicable.

Software engineering can never be anything but an art form!

A lot of fields of endeavor have both mechanical "science-based" components and creative "art form" ones. An inexperienced chef can get some productive cooking done by mechanically following the steps of a recipe someone else wrote--cutting up vegetables, operating the stove, and so on--without making culinary decisions of their own. Experienced chefs, though, come up with ideas for things to cook, which you can't effectively do just by following some procedure. You need to have some creative je-ne-sais-quoi.

In software engineering, we have no long-term need for the first category of people. Any programming task that doesn't require human creativity can be automated. (When's the last time you decided which CPU registers corresponded to which variable names in your program?) An effective programmer uses abstraction for precisely this purpose. They spend all day, every day on making the hard decisions, and they let the computer handle the rest.

This is why Enterprise Software Architecture techniques have always been and will always be doomed to fail. You can't have senior engineers come up with rules that junior engineers can just mechanically follow to write good code. If you could, you wouldn't need the junior engineers, since you could just tell a compiler or code generator or something to follow the rules instead. Everyone who works on a codebase needs to be capable of making good decisions about code, in a way you can't boil down to science and checklists, or the best they can ever do is waste their own time producing repetitive boilerplate.


If you look closer, everybody who creates something at work, like every craftsman, is a kind of artist. But it pays very badly, so even people who like the job are forced to step up into the office because of the better pay. That is part of the reason why you almost can't find a good plumber or electrician: if you can count to 2, you are forced out of the job. As long as you write software yourself, you are not much different from a plumber .. you work with your hands.


> As you long, as you write software by yourself, you are not much different than a plumber .. you work with your hands.

There is one small difference - a plumber needs to design some piping and then execute it, and the execution takes more time than the designing. After writing code, a programmer just presses a button and the results are produced by the computer. Otherwise, there is some overlap. There are many programmers who are no more skilled than a skilled plumber, or whose job is no more creative than that of a plumber, but we call them "juniors".


Yes, of course, and copying and reusing something is almost free in the digital domain vs. the real world.


I get this sentiment often, but I think it is incorrect. Software developers should not consider themselves artists - they should consider themselves designers. Yes, a designer has some freedom to express themselves, but mostly they should satisfy the customer. Think of someone designing a lamp for IKEA. The functional and nonfunctional constraints outweigh the artistic freedom by far. Software developers have one advantage over the poor soul that has to come up with that lamp, though: For us there is serious, hard, science that tells us what can and what cannot work. The IKEA designer must rely on market research.


I think you've got heuristics and art form mixed up


Good software development can't be anything but craftsmanship, or art if you think of "art" as in "artisan".

Think of what a developer is, and what a computer is. A computer is a dumb machine; it lacks creativity and is only good at executing orders, but that it does very well, and very fast. A programmer is a human, creative and able to see the big picture; his task is to inject his human qualities into the dumb machine, so that the machine can do things that are useful for humans.

So, programming is the opposite of applying recipes: if there is a well-defined recipe, that's for the machine, not for the human. If programming is not at least a little bit of an art form, it means the programmer is doing the job of the computer, which is a waste of valuable human time on something the machine can do better.


I'm somewhere in the middle. I see software development as an art form, however it's not disconnected from its solid scientific roots.

Sometimes, the art part is for fun (i.e.: your weekend project), sometimes it's for pushing systems to the limit (i.e.: scientific programming, demoscene, or where performance is really needed).

In my case, the art is exploiting the hardware or the theory in a way that it works smoothly and very performant for the cases you have.

This doesn't mean deadlines don't matter, or that explicit, slow, but very robust code doesn't have its place. On the contrary, it has many places. We need to make the correct trade-offs to get our job done the best way possible.

On the mentoring side, training people is hard, because of both parties. Low ego, open-mindedness, and being open to being wrong are very important. I try to mentor some people around me, and I encourage them to go further than I did. Lastly, I find passing the "old way" to new generations very valuable. While these old ways are inevitably lower level and less hip, they're tried, have stood the test of time, and are generally performant enough (or very performant in some cases).


Just a side note, your idea of art is a bit naive. Take for example the work done by Christo and Jeanne-Claude, citing wikipedia:

> Their work was typically large, visually impressive, and controversial, often taking years and sometimes decades of careful preparation – including technical solutions, political negotiation, permitting and environmental approval, hearings and public persuasion.


I pity the developer (or maybe manager) at Microsoft who wrote the GitHub comment on which this article's title is based. (And no, I never met him when I was at Microsoft.) If he's in the Seattle area, he's likely not even awake yet, but when he is, I expect he's going to have a bad day. Assuming he's not already sick of the Windows Terminal versus refterm drama, it would be interesting to read his perspective.


As the guy responsible for that comment (and the dev lead for the team! Hi!) I can say it haunts me pretty much weekly. My parenthetical-laden snarky sense of humor, my use of italics to indicate speech-like emphasis and pretty much everything else about that comment has been dissected to parts as small as possible.

In the end, though? Yeah, I was completely wrong. I don’t know much about graphics engineering and I’m glad(^) that we were shown up by somebody who does.

I’ve been working in software for nearly 20 years, and I’m firmly in the “knows enough to be dangerous; doesn’t know everything” category the article is missing. It’s not, as some have characterized in this comment section, because I moved into leadership; rather, it is because there is not a single axis on which to measure all engineers. I’m inexperienced in graphics and text layout, and my critical mistake was not respecting the experience somebody else brought to the table. In other fields, perhaps I fall higher on the experienced/wet-behind-the-ears scale.

(^) I may flatly disagree with his presentation and his group of fans taking every opportunity to slander us, but I’m happy that it’s generally possible for us to improve performance and engineering efficiency using the technique he described. :)


> his group of fans taking every opportunity to slander us

Conversely, I think I went overboard defending you and your team, to the point of making my own over-broad generalization about game devs in another subthread [1]. I'm glad it's practical to get the best of both worlds in this case.

[1]: https://news.ycombinator.com/item?id=28744315


Such is the nature of a flame war. It's also REALLY hard to take thread after thread of discussion about how this thing you built is totally shite, and not try to respond in some way.

It's hard to clearly communicate on the internet "we could do better here, we'd love some help, but you don't need to be an asshole about it". Once Casey was set on his warpath, I think it was just too late.


My sympathies to you and the rest of the WT team. I think you’re doing a bang-up job under difficult platform constraints (that most people don’t appreciate), and the amount of vitriol spilled over this one interaction has been really disappointing.

Just remember that there’s a huge number of people who appreciate the work you’re doing to modernize terminal experiences on Windows (nearly 78k GitHub stars!)


yeah, i would be really sad if I was at the receiving end of this.

One guy is not 100% correct one day (because we overworked devs always are?), and the next day someone has written a blog post painting him as a subpar dev, and now he is on the front page of HN, used as an example...


Yea, it's been a generally shit experience. Knowing the sense of humor of the original "PhD" comment, yea, I read that as a light-hearted jest, which was definitely lost in transit. Communicating on the internet is hard. Miscommunications are easy, and this one happened to spiral quickly. It saddens me that this whole incident was used to balkanize the community rather than to work constructively together.


The very ticket [1] is locked by this epic ending:

> You were overly confident in your opinion, but I hope this website helps you understand that it's actually really damn hard.

> The reason your program shows a high FPS under other terminal emulators is simply, because their rendering pipeline works independent of VT ingestion. Gnome Terminal is not laying out text faster than your display refresh rate either. And of course, again, this is something WT will probably do as well in the future... but this project is nowhere near as old as Gnome Terminal is.

[1] https://github.com/microsoft/terminal/issues/10362#issuecomm...


The github issue is a gem.

The reporter is basically being fobbed off by being flooded by random buzzwords and data points, none of them specific, but he keeps trying to drill down to specifics by asking questions.

The tone gets quite condescending towards him.

He gets out of all this with flying colours. He should consider adding that Github issue to his resume.


He is already well-known enough in his field past the point of writing a resume, and I don't think he really would want to enter the corporate IT world that he intensely despises (even with lots of cash). Plus he's already busy on his own working on various projects: the video game 1935 (which we don't really know that much about), Star Code Galaxy (a CS course focusing on fundamentals), and Handmade Hero (a live recording / tutorial of making a game from scratch). He's already really


Damn, my brain died while writing that one.


Yeah, typical Microsoft developers (https://github.com/microsoft/terminal/issues/10362). That issue was pretty mild, they felt threatened and used CoC principles ("combative") to shut down the threat.

This is how they also ruin "open" source projects like Python now.


That's a pretty terrible interpretation of events here. Alongside the "combative" phrasing was a bit of self-reflection:

> Setting the technical merits of your suggestion aside though: peppering your comments with clauses like “it’s that simple” or “extremely simple” and, somewhat unexpectedly “am I missing something?” can be read as impugning the reader. Some folks may be a little put off by your style here. I certainly am, but I am still trying to process exactly why that is.

That post is literally not trying to shut down the conversation; it's simply reflecting on the tone of the commenter. Elsewhere in the comment he even prompts for more discussion on the topic. At no point is the CoC invoked.


> I believe what you’re doing is describing something that might be considered an entire doctoral research project in performant terminal emulation as “extremely simple” somewhat combatively…

Yes, it might be seen as combative, since it amounts to saying "you guys seem to be clueless". When proven right by experiment, it means that the combativeness was justified.

Imo, the language in this quote is almost unacceptable. It's a roundabout way of saying "stand back little peon, you are not important enough to be thinking about problems that concern us."

It's inevitable that when people overestimate their competence and judgement, the moment when humility is forced on them is experienced as painful. Adjust prejudices and move on.


Saying that it is a multiyear-long undertaking was probably a little silly and makes it sound like the person who said it can't have been involved with the software, otherwise they would have known this. But there might also be another explanation that we have seen, particularly with Microsoft:

You can't sell the things that make people just like the software, only new headline features do that.

Example 1: When you open Visual Studio 2019, you get a search box by default that allows you to type in your project name, and it looks for it in the MRU list. Except this is intolerably slow! I suspect it might not be the searching itself; perhaps a tonne of threads are doing something, but it is terrible. ANY search algorithm looking through a list of no more than 50 things should be instant; theirs takes up to around 10 seconds and does fuzzy search by default! Why don't they fix it? They wouldn't sell any more units by fixing it, and you won't upgrade to the newer improved version.

Example 2: When you install nuget packages in visual studio on netfx projects, it modifies the assembly redirects in web.config by re-writing the entire file even if there are no changes. Oh, and it adds extra spaces and stuff so that the diff tools think every single line has changed. I have to manually delete all of the entries, build it again, let visual studio warn me about missing redirects, double-click the warning and VS writes the file in the original style revealing the maybe 2 actual changes. This is total horsesh*t but they won't fix it. Why? You should be using dotnet core, even if your project is 5 years old and established.

It's a shame because there are lots of things to like about MS tooling compared to lots of competitors but each of these things will be a coffin nail that eventually will cause people to jump ship.


The difference here is that the developer of RefTerm is an experienced, professional developer passionate about his craft and building good software. Microsoft is a faceless entity composed of people who mostly don't care about what they do during the day and are just getting by so they can get paid. There's no shame in that. But there is shame in patronising someone when they demonstrate something to be false and locking the thread on GitHub such that no further discussion is possible...

Microsoft is sluggish and delivers products that do the job and nothing more. Sadly, that's the modus operandi of most software companies. That's why nimble little startups are still managing to steal pieces of various markets.

I use Windows Terminal quite often and have definitely noticed its terrible performance. So this is definitely something visible in the real world and not just nitpicking.


> Microsoft is a faceless entity composed of people who mostly don't care about what they do during the day and are just getting by so they can get paid.

Regardless of what you think about Microsoft the company, please remember that the individual workers and managers are people who can be hurt by our tendency to form self-righteous mobs. And according to another commenter [1], one of the people working on Windows Terminal was hurt to the point that they quit.

Full disclosure: I'm inclined to defend the Microsoft team here because I was at Microsoft (on the Windows accessibility team) for a while. I don't think that I or my former teammates fit the stereotype you described. And while I never met any of the developers on the Windows Terminal team (at least not knowingly), I could very easily have crossed paths with one of them; I believe my team used to be in the same building as theirs. So I'm inclined to be charitable.

[1]: https://news.ycombinator.com/item?id=28746624


This might get me some shit but running away like that isn't a good trait to have. Not for Microsoft, but also not for themselves. This will probably be in the back of their mind for a chunk of their career since they have no closure.

I would expect somebody working on a greenfield project at one of the biggest software companies in the world to tackle issues head on with grace.


Those individual workers hurt the project and the community around it by setting the precedent that you need to create your own competing product to verify that bugs can be fixed without PhD research.

Nobody should be harassed over their open source project, but that doesn't mean actions don't trigger reactions.


Nor does it mean that a simple mistake could not be remedied, as it was.

And we should be forgiving.


inexperienced developers are unable to - or just rarely do - ask themselves the question:

"is the current performance in the right ballpark of where it should be?"

here it's obvious - muratori took one look at that thing and just knew that "no, 2-3 fps is definitely, completely off". i, too, had a couple of those moments where i got my hands on some code and thought: "that just can't be right" and then found O(n²) algorithms, byte-by-byte io copy routines, database abuse or duplicated work.

if you don't have the rough reference points of what should be right, you often just don't question correctly working code.


In my experience, basic coding skills can really atrophy in senior developers who quit writing code years ago and are full-time architects/managers.

I’ve interviewed more than a handful of extremely accomplished people with decades of experience as developers and then engineering managers who couldn’t answer extremely basic coding questions (we’re talking “efficiently find duplicates in a list” or “compute Fibonacci numbers” basic). These people never could get past the O(n^2) brute force solution for finding duplicates (or the brute force compute every Fibonacci number starting from 1, 1), because they hadn’t thought about even basic algorithmic complexity in so long. And when people like this are giving orders from the top, you can see how software can get so slow.
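
(For concreteness, roughly the level of answer I'm talking about; a small Python sketch, not a grading key:)

    def has_duplicates(xs):
        seen = set()
        for x in xs:            # O(n) with a set instead of O(n^2) pairwise comparisons
            if x in seen:
                return True
            seen.add(x)
        return False

    def fib(n):
        a, b = 0, 1             # iterate instead of recomputing every subproblem
        for _ in range(n):
            a, b = b, a + b
        return a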

I have no doubt that when these people were actually still coding, they could have answered my questions in their sleep. Quitting coding for engineering management really does one a disservice IMO. To this article’s point, I’ve also had the pleasure of working with people with decades of experience who never quit coding even as senior managers and their level of competence and productivity is astounding.


This is why I am considering leaving my current position. "oh we have something you can work on, but first can you..." 3 straight years of this...


I really cannot fathom how one could forget fundamentals.

I can prove the average complexity of quicksort with simple math transformations of initial recurrence equation which I do not even have remembered but I'm sure I could derive. Last time I did this was maybe 8 years ago if not more.

Yet somehow an accomplished dev cannot spot accidentally quadratic behavior when coding in something that is not Haskell.

IMO, the average dev and the average accomplished dev are no longer the kind that know the fundamentals. Not that it's necessary in all cases, but eventually everything will look like the equivalent of a doctoral thesis when the average dev codes up a spaghetti mess.

Recently I was reviewing code with map(find).sort(indexOf) (O(n^2 log n + n^2) behavior) when it could have been a simple dict; map(dict.get) (O(n)). This person has 10 years of coding experience. When looking at the resume, the dev is pretty accomplished, productive, and worthy of the salary. But I assume the knowledge of fundamentals was missing from the start.
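
Translated into a Python-ish sketch of the same mistake (names invented for illustration; assumes every item appears in the reference list):

    def sort_by_reference_quadratic(items, reference):
        # reference.index() is a linear scan, called once per item: O(n^2) before we even sort
        return sorted(items, key=lambda x: reference.index(x))

    def sort_by_reference_linear(items, reference):
        rank = {v: i for i, v in enumerate(reference)}   # build the lookup once: O(n)
        return sorted(items, key=rank.__getitem__)       # O(n log n) overall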


Reminds me of the Expert Beginner: How Developers Stop Learning: Rise of the Expert Beginner https://daedtech.com/how-developers-stop-learning-rise-of-th...


PhDs often make the worst possible developers. People with PhDs (especially in Mathematics) are often lacking the most critical characteristic of a good developer: they don't know why they do things. Being good at math requires being able to solve problems whose solutions often do not have any practical uses or whose practical uses may not be clear. This is very different from how software problems should be approached. Software development is more about finding the right problems than it is about solving problems; to find the right problems, you need to understand the 'why' in as much detail as possible. Every line of code should be easily justified by how it brings the system closer to achieving its purpose. The PhD diploma itself doesn't have much practical use in financial terms; it's not a good investment to get a PhD for purely practical reasons. Detachment from practicality is the worst attribute someone can have as a developer.


> PhDs often make the worst possible developers.

I personally don't agree.

> Software development is more about finding the right problems than it is about solving problems; to find the right problems, you need to understand the 'why' in as much detail as possible.

This is what a software or CS PhD makes you understand. You know the why's to the lowest level possible.

> The PhD diploma itself doesn't have much practical use in financial terms; it's not a good investment to get a PhD for purely practical reasons. Detatchment from practicality is the worst attribute someone can have as a developer.

In my experience it's the opposite. After I got mine, I realized that I can answer these "why"s faster, infer my own knowledge, and write better code more easily and in less time. The whole process got much more entertaining too.

So, YMMV.


> Being good at math requires being able to solve problems whose solutions often do not have any practical uses or whose practical uses may not be clear to you. This is very different from how software problems should be approached.

I don't really agree. Being good at mathematical research requires recognizing the "core organization structure of a system" and solving the "core problem at hand". I'd argue that it's essentially the same in programming (and business) as well. The main difference is that a mathematical researcher is able to ignore non-pertinent information, while in practice those issues (usually) need to be dealt with.

That said, I wouldn't say that PhDs (math or otherwise) make great developers. Their skill in development is a result of experience in developing software within the software industry and not something they gain simply due to "being intelligent" or by working on their own on their POC research implementations. But I'd point out that it's basically the same for professional software developers as well.


However, there are probably infinitely many organization structures describing a complex system, and there isn't any ontological reality to Occam's razor. Programming is very much about deciding where and at what level to allocate partial structure, which again informs the broader picture of the core organization – and some of this may even be intuitive.


Are we actually in disagreement? I could just as well describe math as "deciding where and at what level to allocate partial structure..." in the same way. Mathematical researchers also build intuition over time allowing them to see certain big picture ideas more easily.

If I really take a step back and re-think this whole thread, I'd say the main difference between mathematical researchers and programmers is just incentives. Funding and output are just fundamentally measured differently, and the work is done very differently with respect to cooperation with others.

Still I'd say that the thing holding back mathematicians from being good developers is simply that research mathematics does not provide development experience.


A bit of a generalisation. For a different view: currently almost all the core devs for NumPy and SciPy (those are the ones I know personally) have doctorates.


I think that we can agree that if you want to become a generalist programmer, able to hop from random web startup to random web startup, then a PhD is probably not that useful. If, however, you want to become more of a specialist, say being able to develop novel solutions to complex fluid dynamics problems, then a PhD is a very useful place to start.


Meh, it's better to treat software development and underlying academic competencies, like mathematics, as completely separate fields.

Sure, the latter helps the former, but rear in seat experience is best for (I'm assuming) both.


I have a PhD (physics), and I worked in a quant team with maths, physics, engineering and comp sci PhDs, and I can assure you the bell curve there is identical to the one in a department of non-PhDs. There are some very good developers, the odd mythical "10x" developer (in this case, someone who really just writes the same thing repeatedly in every job, to be honest), and some very poor developers.

> The PhD diploma itself doesn't have much practical use in financial terms; it's not a good investment to get a PhD for purely practical reasons.

I don't know if you say this because you don't have one, or haven't worked with many (not just one) PhDs?

In the UK at least, your starting salary will be significantly higher than just a plain degree, and it also opens up a broader job market (in the City, at least, and aerospace and defense) where PhDs can be a requirement in some jobs because (junk) degrees are now ten-a-penny.

You learn writing skills, and analytical skills both of which are financially useful and practical.

I get the feel that what you're implying is that PhDs are less imaginative and perhaps less likely to be entrepreneurial. In that case, you may have a point.


My point is that, on average, in my experience, they are more interested in big complex theories and hypotheticals than in reality and all its tedious subtleties. They can miss the point that all these tedious subtleties add up in terms of complexity and shouldn't be scoffed at.

This infatuation with complexity is itself a big problem. I've observed a lot of PhDs neglect the cost of complexity, choosing complex solutions with significant practical drawbacks over simpler ones in order to get some tiny gain in performance (for example) or some specific characteristic which they feel is important but which the average user doesn't care about at all.


I find this generalization a bit hard to believe. Good academic researchers should be obsessed with knowing why they do things. After all, researching the state of the art and identifying a knowledge gap is what they do. And they should also be quite good at documenting the how since they write a lot of methods sections.

Maybe not in mathematics as I have no idea what a math PhD thesis looks like but for all other fields I know I'd expect a good researcher to be exceptionally good at giving a reasoning for why they do something and also at knowing what other approaches were tried by other people and what the pros and cons are.

But I find the comparison PhD vs. developer a bit unfair. I agree that you become a better developer if you develop for 4 years instead of doing a PhD. Even in CS, a PhD entails many things that are not developing, namely writing papers, getting grant money, teaching etc.

However, I also agree with you that if the goal is to become a better programmer, a PhD is not the way to get there (imo).


If you're hiring a PhD Mathematician and expecting them to be a great developer, then you're quite frankly doing it wrong. Don't get me wrong, there are great developers with a PhD in mathematics, but that's got nothing to do with their PhD (in general). You hire a PhD mathematician because you have math problems relevant to their expertise that you need solving, and if they're not great programmers you pair them with a great programmer to help implement their solutions.


According to Amdahl's law: Let the smartest do the hardest. According to Gustafson's law: Add more work if you want to keep the others busy.
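(For anyone who doesn't have the two laws to hand: with p the parallelizable fraction of the work and N the number of workers, the usual statements are roughly

  S_\text{Amdahl}(N) = \frac{1}{(1 - p) + p/N}
  S_\text{Gustafson}(N) = (1 - p) + p\,N

i.e. Amdahl says the serial part caps the speedup, so put your best people on it, while Gustafson says you can always grow the workload to keep N workers busy - which is the joke being made above.)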


cmuratori is a great software engineer, for sure, but, god, this bug report poisons the well right from the start (see below).

Also the report itself is actually quite bad, because the stated 40x performance drop could actually be nothing. If something takes 1 millisecond instead of 25 microseconds (numbers off the top of my head, to illustrate the point), it might be an issue, but it also might mean nothing. This is the thing that requires clarification, from my pov.

Regarding poisoning the well: instead of writing

" Expected Behavior

Despite the increased parsing load, modern CPUs should not have a problem parsing per-character color escape codes quickly. I would expect the performance of the terminal to be able to sustain roughly the same frame rate with per-character color codes as without, and if there was a performance drop, I wouldn't expect it to be anything close to 40x. "

he could just write

Expected behavior

Frame rate of output with per-char color codes is similar to non-colored output.
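For context on what "per-character color codes" means in practice, here is a minimal sketch of the kind of stress case being debated - my own illustration, not the exact test from the report - where every printed character carries its own 24-bit SGR color escape:

  /* Illustration only: emit a long run of text where every character is
     preceded by its own 24-bit foreground color escape (ESC[38;2;r;g;bm).
     This is the per-character escape load described in the bug report. */
  #include <stdio.h>

  int main(void) {
      for (int i = 0; i < 1000000; i++) {
          unsigned r = (i * 7) & 0xFF, g = (i * 13) & 0xFF, b = (i * 29) & 0xFF;
          printf("\x1b[38;2;%u;%u;%um%c", r, g, b, 'a' + (i % 26));
      }
      fputs("\x1b[0m\n", stdout);   /* reset attributes when done */
      return 0;
  }

The expectation in the report was that a terminal should chew through this at roughly the same rate as plain, uncolored text.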


> 40x performance drop that is stated could be actually nothing.

It isn't though. The performance difference between Windows Terminal and refterm is dramatic enough that he could demonstrate it in a video.


And that is why I'm saying the initial bug report could have been done better.

For example, out of curiosity I did the test from the last message: running "cat big.txt" in WSL2. In Windows Terminal it took 1.5s, but in the Kitty terminal running under wslg (same Ubuntu version) it took 0.34s. PowerShell, on the other hand, is awful (20-25s).
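In case anyone wants to reproduce that kind of measurement without hunting for a big file, here's a rough sketch I'd use (assuming a POSIX environment such as WSL2; the 100 MB total and 4 KiB buffer are arbitrary choices):

  /* Rough throughput sketch (assumes POSIX clock_gettime, e.g. under WSL2):
     push ~100 MB of plain text at stdout and report how long it takes.
     With stdout connected to a terminal, backpressure from the pty makes
     this a crude proxy for how fast the terminal consumes output. */
  #include <stdio.h>
  #include <time.h>

  int main(void) {
      static char line[4096];
      for (size_t i = 0; i + 1 < sizeof line; i++)
          line[i] = (char)('a' + i % 26);
      line[sizeof line - 1] = '\n';

      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (int i = 0; i < 25000; i++)          /* 25000 * 4 KiB ~= 100 MB */
          fwrite(line, 1, sizeof line, stdout);
      fflush(stdout);
      clock_gettime(CLOCK_MONOTONIC, &t1);

      double secs = (double)(t1.tv_sec - t0.tv_sec)
                    + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
      fprintf(stderr, "wrote ~100 MB in %.2f s\n", secs);
      return 0;
  }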


I personally disagree. I don't see any poisoning in the original bug report; furthermore, I think it conveys the issue (and some of its subtleties, such as referencing "modern CPUs") much more clearly than the version you propose.


Agree, pretty unfriendly tone. I would expect from a more senior developer that he or she has a high enough social intelligence to understand how to confront others with their findings in a clear but friendly way.


> Expected Behavior

Whether this is interpreted as unfriendly depends on the stance of the reader. I can easily make this sound somewhere between "celebratory" or "intriguing" when I read it. If I were the author and performance/bug-reports somehow meant negative consequences for me, then it would sound hostile.


I find the expected behavior paragraph perfectly fine.


> Due to the way the programmer’s ranking system works, an average company will have a big crowd of inexperienced people and very few who can tackle complex problems. For this reason, the vast majority of software is written by the ones who are still learning the ropes.

I wonder if another reason is just that it’s such a growing industry. The number of CS grads each year is growing, and I know tons of people who sidestepped into the profession from other degrees as well. The number of jobs is growing too. If the job market is twice the size it was a decade ago, half the employees are going to be new no matter what. Perhaps hilariously, that huge demand means high wages, which means more early retirements, which means fewer senior people too.


> it seems to me that the software industry doesn’t value technical excellence as much as it values having warm bodies report to you

Naturally. Computers don't pay; these wet bodies pay, progressively, for the value they perceive. Technology is merely an instrument for adding currency value to the products that hold it. Also, a business that relies on the technical excellence of a small group of individuals is not a well-conceived business.

Logical complexity embedded in an artifact, logical or physical, is a completely separate phenomenon from the currency value it holds within the system of the human economy. It took me way too long to get that. The fact that the two are considered the same is itself probably something worth discussing.


I think I generally agree with this, but also wonder what the alternative is.

If you step outside your own industry, do you want plumbers that have a hobbyist appreciation of the intricacies of fluid dynamics, or do you want someone to come and unblock your toilet in the middle of the night?

There's possibly a middle ground where the geniuses invent toilet technology in such a way that it is easy to install and maintain, and "normal" people do all the other boring bits, like stocking warehouses, transporting it, selling the house it sits in, answering the phone etc. It feels like we've generally got there in both plumbing and in software and is that a bad thing overall?


I just want Microsoft to develop a proper terminal application and not handwave away complaints about performance with "that's a PhD right there". Even if it was a PhD, I'd then expect a PhD to be put to the task, because this is a crucial component of the operating system.

Realistically though, a PhD is the opposite of what you want here. A PhD is almost by definition an inexperienced programmer. It's also someone who is likely to fall in love with their ideas, putting aesthetics or purity above practicality and effectiveness.


I was more responding to the article, which used that specific example as a jumping off point to talk about management structure in software development generally.

I don't use Windows, but I do use Kitty terminal (https://sw.kovidgoyal.net/kitty/) and from my interactions with that project and trying to get code ligatures working in my setup I've learned that terminals are really complex, both for technical and legacy/ecosystem reasons so my gut reaction is to support the MS engineers, without having read deeply into that specific issue.

edit: having read some more of the original ticket, I'm more firmly on the MS developers side. I think the other person is both rude and uninformed about what they are talking about, which is a terrible combination.


The ticket discussion brought to light that it is a rendering problem. The other person wrote a terminal renderer that has none of the performance problems and all of the required features. I would say the MS developers have been proven wrong.


Possibly the thread has been edited, but I see the original poster claiming that, and it seems a popular view around here, but I don't actually see the evidence that it actually happened in the thread itself. Are we talking about this thread?

https://github.com/microsoft/terminal/issues/10362


It's here:

https://github.com/cmuratori/refterm

That issue was closed but the current open issue which acknowledges this is a real problem is here:

https://github.com/microsoft/terminal/issues/10462

Here is a comment on the code directly from a contributor at Microsoft:

> @AnuthaDev We‘re quite aware about the code and are extremely happy about what was created there. I mean it seriously: It sets a great goal for us to strive for. Unfortunately the code is intentionally GPLv2 licensed and we‘ll honor this wish entirely. As such no one at Microsoft will ever look at either of the links.

> I‘m the person who‘s tasked with building an equivalent solution and you may subscribe to #10461 to get updates of my progress. Unfortunately our existing project isn’t straight forward to modify the same way we could build a terminal from scratch. A lot of parts of this project must be rewritten and as such it‘ll take a bit for us to catch up. But we will - it’s only a matter of time.

https://github.com/microsoft/terminal/issues/10462#issuecomm...


Maybe I'm misreading that comment from the MS devs but it seems like a very politely worded, "fuck off" rather than a "yes, you were right".

Could be wrong, but even if I'm right about that, I'm still on their side I think.


This would be a nice thesis. Unfortunately we have software like memcached, redis, nginx, Envoy, Kafka, Zookeeper, and more that come straight from professional programmers which makes this thesis more complicated than this post makes it sound. Perhaps Occam's Razor should apply here.


That software comes from a pool of millions of SWEs across the world. And importantly, that software was created outside of corporate structures! I think these exceptions prove the rule.

Most SWEs are working for companies that don’t make money reinventing architecture. Leading a bunch of people to use existing tools to sell something to customers is probably where most of the value comes from in the SME world, and probably in most of the big-company world too.

Everywhere I have worked, the very experienced people are probably doing “the same year repeated many times” rather than climbing a pyramid of experience and taking on more complex engineering challenges.

There is a lot of money in glue code.


"Everywhere I have worked the very experienced people are probably doing the “the same year repeated many times”

Yeah...


Maybe it takes PhD to develop that through the Microsoft development process.


At one of my previous companies they hired two engineers specifically for performance. They were sharp guys. Problem was, they were put on their own team/silo and basically no one involved them in any of the work. Their PRs were considered more of an unpleasant intrusion into other people's sprints. They became useless because of the terrible organization! Total wasted talent, because "performance" was an afterthought in every respect.


Nope!

It's just that in corporate world, developers rarely do what they think is relevant for them.

The people who did Windows Terminal probably cared a lot about COM or WinAPI or maybe CS, but I think it's fair to imagine that they were not fans of text-oriented interfaces. It just did not occur to them that they should be extracting the last bit of performance here. People can only type so fast, after all.


TBF we've actually maintained the Windows Console for the better part of 5 years and we're incredibly passionate about performance. We live and breathe in the commandline. Performance has always been one of our P1 priorities, because improvements to the console have second-order impacts on the entire commandline ecosystem.

There is an engineering tradeoff though - we've only got so many devs, and at a certain point you have to say "good enough for now, we'll come back to this, let's go do this other thing for now". The Terminal is faster than conhost was, and that's a good place to start, but I don't think anyone thinks we're done working on the Terminal, and we're certainly not done working on its perf ;)


Yeah, that’s another problem, the software industry as a whole is horrendously bad at getting requirements right.


Windows Terminal is a shit show. The dev blog posts are all about useless features such as image backgrounds while fundamentals like Unicode input are still broken.


My mouse would start lagging whenever I “moused” over an active Windows terminal window, in spite of running a brand new system with a 3070 GPU etc. Did they ever manage to fix that?



Like Windows Notepad. I suspect it was never designed for anything more than trivial cases, and then it never got a requirements overhaul in subsequent versions of Windows.


I was under the impression that Windows Terminal is a recent greenfield project.


Yea. Microsoft has two terminals. Command Prompt (cmd.exe), which is the default that comes with Windows, has been around forever and has basically never seen a significant update. And the new Windows Terminal, which is the recent greenfield project that only works on recent versions of Windows 10 and doesn't ship with Windows by default (yet).

Unfortunately people often get the two confused when talking about "the windows terminal".


And see, even you've done it here. Command Prompt is `cmd.exe`, which is different from PowerShell (`powershell.exe`), and both of those are different still from `conhost.exe`, which is the actual console window itself. cmd and powershell are "shells", which both need to run in a console, and conhost.exe is that console (which is the "terminal window" for these applications). More reading: https://github.com/microsoft/terminal/blob/main/doc/Niksa.md...

Conhost is the window that's had updates all throughout Windows 10. It's responsible for both the console subsystem, and the console UI. The UI though is the part that's largely being replaced by the Terminal. Conhost will never go away, but the Terminal is where all the action is these days.

(It's also shipping with Windows 11 by default)


Corporate practices are to blame here also, where instead of improving the existing solution, a new project is launched, which often carries nothing from the previous one.

Then the public will conflate the two products, and some will avoid the new one, not knowing if it's here to stay (another corporate antipattern).


Actually, the Windows Terminal shares a lot of its codebase with the Windows Console. The buffer, the VT parser, the renderer interface, most of the UIA implementation - that's all the same code. Improvements here to the Terminal actually help conhost.exe as well.

The UI of the Terminal is what's new, because trying to iterate on the legacy UI of conhost.exe was simply not maintainable.


I think a significant contributing factor is that management is a local maximum.

I am a self-identified "sort of knows what they are doing" (albeit, sometimes insecure about that) professional programmer. I've also had the chance to take a leadership position. So, I think I have a useful perspective here.

The amount "I" can accomplish in a leadership position is so much higher than by coding directly. So, on the margins, it's really hard to justify time spent coding, compared to coordinating others. It's incredibly easy to imagine stagnating as a coder due to not doing very much of it. Yet, clearly when everyone follows this route, software quality suffers. I don't have any particular solutions.

Perhaps eventually the rate of expansion of programmers will slow, and we'll end up with too many "sort of knows what they're doing" programmers for everyone to move to management, so people/companies will start waiting until programmers hit the experienced mark to make the leap.


Maybe everyone needs their ansi.sys moment though. Back in the early 1990s, I, and probably half the programmers using DOS, rewrote the DOS ansi.sys driver/TSR because it was _SLOW_, mostly (IIRC) because it was calling the BIOS text output, but also because it was a pretty solid implementation of the ANSI terminal/esc sequence "standard".

So, the first half took about an hour or two to handle output (just write a character at the right location in the framebuffer - see the sketch below) and get the INT/TSR boilerplate right. Then the next 20 hours were spent fighting all the edge cases that cropped up, with corrupted text in various TUI programs that were calling the DOS string functions. Then, as the size+perf approached NANSI.sys (http://www.kegel.com/nansi/), give up...

Learning experience.
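For anyone who never had that moment: in DOS text mode the screen is an 80x25 grid of (character, attribute) byte pairs starting at segment B800h, so "handle output" really is a two-byte store per glyph. A portable sketch of the idea (the framebuffer is simulated with an ordinary array here, since the real thing needs real-mode DOS):

  /* Sketch of the core trick behind a fast DOS text writer: printing a
     character is just storing two bytes (glyph, attribute) at row*80+col.
     `vram` is an ordinary array so this compiles anywhere; a real TSR
     would point it at the B800h segment (linear 0xB8000) instead. */
  #include <stddef.h>
  #include <stdint.h>

  enum { COLS = 80, ROWS = 25 };

  static uint8_t vram[ROWS * COLS * 2];        /* stand-in for the B800h segment */

  static void put_cell(int row, int col, char ch, uint8_t attr) {
      size_t off = ((size_t)row * COLS + col) * 2;
      vram[off]     = (uint8_t)ch;             /* glyph byte           */
      vram[off + 1] = attr;                    /* color/attribute byte */
  }

  int main(void) {
      const char *msg = "Hello, ANSI";
      for (int i = 0; msg[i]; i++)
          put_cell(0, i, msg[i], 0x1F);        /* white on blue */
      return 0;
  }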


> Based on my limited anecdotal evidence, it seems to me that the software industry doesn’t value technical excellence as much as it values having warm bodies report to you. This in turn limits the amount of experience an average programmer working on these big projects will have, just because they are incentivized to leave the hands on implementation work for some sort of a leadership role as soon as possible.

+10000. This is so true. It's not just that people are incentivized to leave hands-on work. Every level of management genuinely believes that a senior engineer should not spend much time on hands-on work but should instead focus on "tech leadership", which in a big company means going to meetings, influencing other teams, writing white papers, and of course "coordinating".


From the linked article itself, I can attest to the constant push to move up into 'leadership' roles, particularly in some companies. I have stayed and worked as an engineer for 37 years. I have been 'team lead' a half dozen times, but never held a full manager role. Why? Because I would be a poor manager. I have personally experienced managers who were technically brilliant, but not good at managing (or inspiring) the technical staff. They should have remained as engineers/scientists. Some corporations used to have parallel paths for technical staff that had management-level pay and benefits, but the suits in the 'C suite' have eliminated that for the most part over the last twenty years. Bell Laboratories (now a shell of its former self) was once one of those places.

So in the current environment, there is constant pressure to move up or move out. Even startups are no longer immune to that. If you want to have enough shares for it to really matter in a liquidity event, you need to have a team of people below you, or at least a director level title. The startup environment in 2000 was completely different, and technically astute engineers (mechanical, electrical, software) could attract strong stock grants as engineers. That has changed. I am fortunate in that after 37 years of doing this, I am at a position where I don't even have to think about these issues any longer. I feel sorry for younger engineers breaking into the field, the management pyramid is such a PITA.

And back to the topic of strong technical engineers that are poor managers? If you get stuck into one of those situations, constant micro management, constant downward pressure of "you don't know what you are doing", get out. Find a new position elsewhere. Good managers will coach you when you need help, but will let you do your own work. Good managers will help identify your deficiencies but also guide you to work that plays to your strengths. Good managers will provide you opportunity for growth. I learned this early in my career, blessed with having an awesome manager. Since then, every job I have had, first thing I did was figure out which managers were good, and which ones were not, and made sure that I never had to work under a bad manager.

So you can make a career in engineering, but it has to be a choice, and you have to learn to manage your managers.

Also note, while I recognize that I would be a poor manager (for various reasons), mentoring is a different skill set. If you are a senior engineer, take every opportunity to mentor a starter in the field.


This seems like giving up. The article is right about the problems but seems temperamentally opposed to actually fixing them. Number of people under you is a terrible metric in many ways, but it is at least objective (ish) and hard to fake; so is years of experience. If you take the position that programming skill requires years of particular kinds of learning and there is no way to assess whether someone with x years on their CV has actually done y years of that kind of learning... well, maybe that's true, but what's the actionable takeaway from that?


I don't disagree. That said, the first step in solving a problem is to understand that there is one and what are its root causes.

EDIT: I think sometimes the goal of a piece of prose is to be thought-provoking, i.e. to allow it to reach somebody else who will perhaps find a better solution, if only they are stimulated enough to care about the problem.


When you interview a line manager and he boasts that he led a team of 100 engineers, do you question that he could have done the job with 10 better-picked engineers?

You get what you measure. If you reward bloat, you get train wrecks.


Hm... It's totally okay to not be able to do it. Nobody can know everything. It doesn't mean you are in a "lower" category compared to your colleagues.

Okay, it's not a global problem (i.e. "needs a PhD") but just a personal problem (i.e. "I have no clue how to solve it"). Trying to get as much stuff done as possible, I've said similar things in the past, though. Didn't really think about it before typing. Happens...

Nobody got hurt in the process, right? Let's forgive this little misinterpretation and move on.


Nobody here seems to be focused on the specifics of the case. But I think moving on would pass up a great opportunity to re-evaluate corporate SE choices.

At the very least we can agree it's a good thing that the dev was able to fork/re-implement the tool with the given API.

But for me it raises the larger question. How useful is this form of software engineering management?

The tool is relatively simple to re-implement (which is a good thing), but at the same time an entire team/governance structure was put in place to act out a set of motions, without pushing what is possible.

To be blunt, it sounds miserable to have to work like that.


> the software industry doesn’t value technical excellence as much as it values having warm bodies report to you.

One of those can easily and objectively be measured, and the other one can't.

Various orgs have tried to measure financial "value", imperfect in and of itself, but then face serious problems attributing that to individual developers.

Fundamentally, discussing "good" software is performing subjective evaluations against subjective criteria. There's no easy route out of that.


The problem with the excuse parade is that it has to justify itself. They have to be special, the avant-garde, selected through countless interviews to live well inside the castle. If some wandering apprentice can outperform them, the hot air escapes from the "rockstar" developers wrestling with complex code inside the compound of a FAMG firm.

Then nothing remains but embarrassed silence. And the question of what these people are being paid for, if they waste the customer's money, a.k.a. the performance of the customer's computational platform, and the outcome is so sub-par. Which is why they are all sitting in one huge meeting today.


I work as a mining engineer and in my eyes these stages are effectively the same as the author describes in the software world. I'd even argue they apply to most "professional" careers.


I remember a post a Microsoft dev wrote about adding a fairly simple feature to a web API. The code itself could be written by a single dev in a day or two, but the logistics needed for testing, documenting, supporting and communicating this involved hundreds of people and thousands of man-hours. It also would bind them to maintaining this feature almost indefinitely. Because of that last point alone, it took months to years to get sign-off on features.


Most of the industry seems structured to keep devs at an intermediate level: resume-driven hiring, ageism, proliferation of open source that prioritizes easy over simple, little discussion of software design, complete lack of education/resources to move devs past that, and incentives that push people into management.

Another thing I realized when writing this: where do experienced devs congregate? (It isn't Twitter.)


> A few hours later another programmer came up with the prototype of a much faster terminal renderer, proving that for an experienced programmer a terminal renderer is a fun weekend project and far away from being a multiyear long research undertaking.

I read through the Github issue and can't find the prototype mentioned here. Can anyone link the PR/code if they know it please?



FWIW he wrote it on the same day @Microsoft locked the issue to prevent further posts by non-Microsoft people.


I was curious as well, I dug a bit and I found this:

https://github.com/cmuratori/refterm

I think he just went ahead and created a completely different terminal.

Also, some background, the original raiser of the PR is Casey Muratori, of Handmade Hero fame:

https://www.youtube.com/channel/UCaTznQhurW5AaiYPbhEA-KA

https://github.com/cmuratori/

https://caseymuratori.com/


Whatever the comments from both Microsoft and Casey Muratori about particular details of their implementations, Windows lacks a terminal like Linux Kitty (there's an unrelated Kitty that's just a fork of PuTTY).

And by that I mean speed and features. Kitty is fast and a joy to use, and it is the software I miss the most when I am using Windows.


I noticed that when you propose a new idea, often there is no shortage of resident "experts" giving you hand-wavy reasons as to why it can't work, won't work, etc. I kind of feel like you DO need a PhD just to have enough clout for these kinds of people to trust you.



My company hired a number of PhDs. They were all smart and well educated but for some reason couldn’t turn that knowledge into working software. So they were all fired in less than a year.


This story should be pinned somewhere for all to see.


Other examples of slow performance in MS products are file system access and process creation.


FWIW, file system access is kind of a bad example - it's slow because the design allows people to hook into it. Once you've made promises to end users, that limits the kinds of optimizations you can do and how far you can go with those optimizations.

For one example, if you were to make file I/O bypass antivirus software in 'performance sensitive cases', few enterprise admins would accept that. That performance optimization would be disabled in a millisecond. You can do directory exclusions, but some of the I/O still has to go through the filter to be checked against the exclusion list.


I swear half of this is the sunk cost fallacy of CS educations.


Which half?


Nice ending by the way.


There are a couple claims/assumptions in the article that feel weird to me.

1. The article derides those using "years of experience as some sort of a badge of knowledge", yet keeps referring to "experienced" programmers as if they were better. Maybe it's terminology, but in my experience (no pun intended) there's no useful correlation between the time one has spent writing software and one's ability to tackle hard problems. "Experience" seems to matter much less than talent, imagination and passion.

2. Article says "Sitting in an office and shuffling papers around for a decade doesn’t make you a master programmer. Writing software does." -- writing software for a decade really doesn't automatically make somebody a master programmer. The author states elsewhere in the article: "I believe your self interest and curiosity matter more than the years spent writing getters and setters". So, which one is true? As mentioned in the previous point, I tend to think the latter.

3. The article mentions "the vast majority of software is written by the ones who are still learning the ropes" as if it were a bad thing. It really isn't, in fact this is the only way people can get "experience" in a growing industry where junior devs outnumber senior devs simply due to how large the industry has grown over the past ~40 years. It has everything to do with exponential growth in the software industry and has nothing to do with programmers going into middle management and leadership roles.

4. To make sure software is competently written even if most of them are written by junior devs, the managers/leads have to make sure everybody is assigned tasks that they can handle, perhaps with supervision by a more senior/skilled dev. The job for the tech lead is to figure out which is which, and ensure that the critical parts are competently dealt with by appropriately skilled people. This is a skill that requires competence and experience. Unless you have an endless supply of senior devs, you definitely want to "distract" the senior developers from their implementation work and put them in charge of this work.

5. Article says "the experienced programmers will take care of the design problems, but that is not going to save your bacon if the implementation of a great design is terrible." -- this is of course true if you have a team of truly terrible programmers, but not necessarily true if they are just "mediocre". Design decisions include choice of language and architecture (and sometimes process), and some demand more of the developers than others. There are ways to isolate bad implementation with clean(ish) interfaces, so that "big ball of complicated code" becomes small balls of bad code that can be fixed up bit by bit.

6. This is one of the most confused claims in the article: "If you want to produce a high quality software, you also need enough experienced people in the trenches that are not merely delegating the orders from the ivory tower." The only way this can work is if you only hire "experienced" people and reject all junior devs, because in the author's ideal world there's really no place for junior devs. They can't be mentored, because that would distract "experienced programmers" from the real work in the trenches. They can't be groomed for management roles, because then they can't become "experienced programmers". It really sounds to me like the author doesn't know how to properly manage junior devs in a team and simply assumes that if "experienced programmers" were able to keep writing software in the trenches, the result would automagically not suck. I don't think the real world works that way, not until the software industry collapses and only the best programmers are able to keep being employed.

In all, it feels like the author was trying to sound insightful by stringing together a story that, at best, just points out that "experienced programmers" are better than those still wet behind the ears, without providing a suggested solution, and at worst, makes up superficial narratives that don't really hold up when you try to poke them.

Sure, for every senior dev taking on management or mentoring duties, the team loses some quality man-hours of coding, but that is presumably a positive trade overall, or people would stop doing it. The only way to keep "experienced programmers" focused on coding is: don't hire junior devs, don't mentor junior devs and don't let technical staff enter management. I'm pretty sure that's not what the author was trying to say, and unless I'm missing some obvious solution pointed out somewhere, the whole point of the article seems to be spinning a flamewar from GitHub into a sentimental narrative that doesn't lead to any conclusions.


Such an interesting case. We need to encourage respectfully disagreeable people like the developer who raised this issue. A less disagreeable person would have stopped pushing after receiving the GitHub feedback regarding how complex the problem was.


tldr; common sense.


It's a shame that engineers commit their time for free for the benefit of multi-billion-dollar corporations. Why do they do it? These corporations are not your friends.



Is there an aggregator for “coding drama”? Because this was all very fun to see.


There's one for github drama, but it's not limited to coding.


Puzzles me. It’s ironic: you give stuff away as “free software” for “the masses”, and you are just helping the Pandora Papers mob get richer. Unless that software is genuinely disruptive - Tor or BitTorrent, for example. But that’s more rare. Memcached and Redis being free just helps corporations, really.


Maybe because they are working with people, and treat them as people. After all corporations are made up of human beings.


On the one hand you have idealists who think the most important thing in the world is getting the last bit of performance out of every program. On the other side you have businesses who don't care too much about performance of their applications (especially when no money can be made out of it) as long as 80% of use cases are covered.

Whichever side you sympathize with, it's pretty arrogant to think the other side is completely wrong.



