The Grug Brained Developer (2022) (grugbrain.dev)
490 points by simonsarris on Oct 30, 2023 | 192 comments



It's weird how smart people are naturally attracted to complexity like a moth to a flame. It takes years to learn to fight the urge to over-engineer.

Once you learn to see it though, it's hard to ignore. Now I can tell instantly if code is over-engineered. Unfortunately, it seems like maybe 99% of code is over-engineered. The developer's incentive to maximize their own lock-in factor and billable hours are powerful forces. Even developers who appear to be totally robotic and ego-less are often guilty of over-engineering. It works on the subconscious mind. Few are ever able to escape the mindset because they are not fully conscious. They are not thinking about every single line of code that they write. They decide on some goal and then churn out whatever code first pops into their heads to get incrementally closer to that goal... Not realising that, at each step, there were many alternative paths that were superior.


> The developer's incentive to maximize their own lock-in factor and billable hours are powerful forces.

In my experience these are rarely, if ever, the reasons for over engineering.


My hobby projects are always over-engineered despite my best efforts. There are absolutely no monetary incentives here, so anecdotally I agree with you.


Yup, but it goes both ways. I end up over-engineering as some feeble attempt to avoid technical debt, only to realise my over-engineering /is/ the technical debt. Or I end up with relatively simplistic / specialised code that needs an entire rewrite and migration process any time something is added. Either way, it's a bunch of rewritten code mixed with paralysing anxiety about writing bad code.


Get used to, and even good at, rewriting. I enjoy it, and I realize that I may never have the best idea of what's appropriate at one point in time, but rather accumulate an approximation over time as I reshape the code to the best of my knowledge. And I develop tools and practices to aid me in refactoring faster and with more confidence. Most importantly, keep throwing yourself into it.


I think we as developers underestimate the value of a 100 line Python script. Everything could be hard coded and inflexible, but it's still easy to refactor because you can keep the whole thing in your head.

"Bad" design can be fine if it's kept simple.


How refreshing it is to read such honest findings about one's own abilities!


My tendency now is to aim for over-simplicity in hobby projects. I already have to deal with over-engineered garbage at work I have no choice but to accept, so I don't want to bring those headaches home.


You’re just practicing in your spare time, so your subconscious knows what to do when it gets to work on Monday. ;P


I also did a lot of hobby projects and open source work. Still, in my early career, I was over-engineering everything including my own hobby projects. I think I was trying to put my signature on the work and I unwittingly achieved this through unnecessary complexity.

I was thinking of good code as something I had to invent, but now I feel like it's more like something I have to discover.


Lock-in factor and billable hours can be motivations for contractors.

But, for contemporary software developers in general, I'd guess more often it's either resume-driven development (e.g., add the big complex new framework keyword to your resume) or not yet having enough experience to know what complexity is worthwhile for a situation.


In my experience most over-engineering can be explained by a lack of understanding. When we design it, we don't know what matters and our guesses are wrong; and when we modify it later, we don't have time to figure out how it really works. Both of those problems can be fixed by simply spending more time to understand, but time is money.


Not a lack of understanding, a misalignment of values.

I had a team once implement the MediatR pattern for a < 5k LoC codebase (I'm guessing at its size but it was made larger due to the overhead of the MediatR pattern).

When I asked them to remove it, it became a political fight that went to the VP because they were convinced that sort of flexibility was a good idea. Fast forward a year: the team had a new technical leader who assumed I wanted that complexity. When we finally talked and I mentioned I didn't like it, he confessed he wanted to rip it all out.

People value the wrong thing too often.


> they were convinced that sort of flexibility was a good idea.

Believing flexibility is needed still suggests a lack of understanding.


not being absolutely perfect suggests a lack of understanding, but at that point we're engaged in a tautology.

Values are those things that help guide you in the face of imperfect information. Not having a crystal ball that can predict the future perfectly means you have imperfect information.


Perfection is not necessary for you to be confident that you have a reasonable understanding and can build something to match. It is possible the confidence will prove to be misguided, but you can deal with that later.

To prioritize flexibility means that one lacks the understanding required to even build a misguided confidence.


> It is possible the confidence will prove to be misguided

At which point your description of a lack of understanding applies, hence my comment.


That does not imply there is a lack of understanding in the moment. Understanding does not seek perfection towards future events. However, if you do not even understand what is known in the present, that is when you will start to lean on flexibility.


We can always beat a word into the shape we want it, but the phrase has a common meaning, I would suggest you use a different word.


I've often said "design for deletion": Most of those long-term "flexibility someday" needs are best met by making sure the inflexible modules or flows can be clearly identified and ripped out for replacement.

This leads to a certain kind of decoupling, although with a higher tolerance for coupling that can be kept in check by static analysis.


Related, I think, is the hope that switching to a new framework will somehow solve everything annoying about the old stack and let you undo past mistakes. In reality, though, if you're lucky enough not to end up supporting two stacks at once for a long time, you end up making lots of new mistakes again.


I never thought so either, but then I worked at a place that had a stale product, powerless product teams, and developers rejecting most features/writing random code all the time.

It took me a month to realise that the staff developers very much revelled in and protected their bad code and bizarre domain choices.

It was so far gone that there was no way to get rid of them and the product was just slowly dying and burning the remaining cash.

Then the mergers happened and they all got let go, only retaining the name for brand power and the entire stack was quietly moved over to another similar product which was rebranded.

Separately, there were absolutely very large consultancies that had a programming style/rules based on making their implementations difficult to read/modify, needing to call their COE back in to fix their code or add features - with it being very hard to modify. Talking entire codebase structured with ridiculous levels of abstraction and annoying code style. Bad integrations requiring their tooling to work and make sense of etc.

They target traditional orgs where the management just wants to get a project through and then bleed them over years.


>It took me a month to realize that the staff developers very much reveled in and protected their bad code and bizarre domain choices.

I think personality can account for this without any reference to incentives, which come in to explain how this personality problem can be so common among successful engineers.


Today's best post on complexity, to me, was "I accidentally saved my company half a million dollars": a story filled, yes, with a lot of poor developers, but much, much worse, filled with Conway's Law style lessons, of madcap organizations & wild legacy systems that we live atop & typically just have to make do with. https://news.ycombinator.com/item?id=38069710 https://ludic.mataroa.blog/blog/i-accidentally-saved-half-a-...

So much of the grug-brained argumentation is same-side violence, is developers roasting developers. Assigning bad motives & declaring immoral bad people among us probably takes already-existing same-side disdain & amplifies it, foments real & powerful hatred. Yes, various petty or criminal over-engineering happens some, yes. But I usually think there are simpler Hanlon's Razor explanations that aren't even the individual's fault, are just the story of an org: an organizational body where so many constituent members of the body-whole have so little idea what others are doing or have done, and access to so few who can offer informed situational-appropriate wisdom for whatever dark end a squad finds itself tasked with.

In some ways, this has been the actual story of open source. We have not been end user facing. We have grown communities of practitioners with individual knowledge & experience that ports & can be compared & discussed with others, from outside our company. We get peership that companies can rarely afford or find after they grow mid-sized.

The Influx folks just talked about their 3.0, & replacing their custom built engine with Flight, DataFusion, Arrow, and Parquet. Maybe they might miss one box or two on neat nitty gritty optimizations (maybe this guy elsewhere in this thread, insisting on diy'ing binary encodings because protobuf doesn't have 16-bit ints can find some deficiencies, https://news.ycombinator.com/item?id=38078133), but the nice to haves of ecosystem integration seems like an intangible, but one that lets little good things out there plug in & help out, and that seems invaluable. https://news.ycombinator.com/item?id=38013714 https://www.influxdata.com/blog/flight-datafusion-arrow-parq...

The top comment to me epitomizes the worst of grug brainedness. It's smug & certain of itself, a strong & divisive Hanlon's violation of high degree. That kind of blue-on-blue attitude to me is far worse than 99% of the people who end up over engineering. Few organizations have real defense, lack coherency, to help themselves away from complexity, or they have such tight constraints that they overprune & disaffect those who do care and have good ideas. These are organizational issues, and having an org that can somehow see itself & what's afoot is the huge challenge. Stop hating on devs, please.


> Stop hating on devs, please

I don't think this makes sense. For example, currently I have a big inclination to think badly of developers working for Vercel/recommending NextJs, because they are invariably the same people and do it because of monetary benefits. The intentional over-engineering they are adding to make a profit is insane and evil. There is no way I can make this point without "hating" the people doing it.


In my experience over-engineering also often doesn't have the result of locking in devs, although their billable hours may increase in the short term.


I struggle with this constantly. I think there are two problems:

1. I like interesting puzzles. A lot of code - especially commercial code - is pretty boring if you do it right. I find myself subconsciously pushing for features that will be fun to implement. And by "fun", I mean, features that will overcomplicate everything.

2. While I'm in the middle of programming something, all the choices that I make seem straightforward and necessary. It's only later, when I step away from my code and try to understand it with fresh eyes, that I notice what a mess I've made of everything.

I also think that a lot of the time the most obvious solution to most problems is quite complex. It takes wisdom, skill and domain knowledge to know where to look for simple solutions to any given problem. Simple, clean solutions are rarely obvious.

"Oh, it looks like we're slowly implementing a halfbaked message queue. Lets use a standard message queue instead." "Oh, we're slowly building up a custom, buggy, binary framing protocol. Lets just use protobuf/msgpack". "How about instead of writing a custom RPC protocol to fetch data, we just use REST over HTTP. And then we can put nginx in the middle to cache our backend responses, and we can throw out our custom cache."


This is it. There's real skill in creating simple solutions to complex problems. Knowing the general landscape of what's out there, and easily available off the shelf, really does help.

Developers grow. It starts with simple code that doesn't work. The next step is complicated code that solves the problem, in messy, unmaintainable ways. The next step is writing super clean, almost boring code that's highly readable and "dumb" and does exactly what it's supposed to do.

The other thing to realize, a lot of great code doesn't just spring from the developer's hands in its final form -- it's extensively edited and rewritten into its final, good, form.


> There's real skill in creating simple solutions to complex problems.

Not entirely facetiously, I think that, for engineers, there's real skill in creating simple solutions to simple problems—not, for example, finding the general instance of the problem and solving that, when the problem is unlikely to recur and crafting the perfect general solution delays delivery on what's actually in front of you.

(I know Perl's not fashionable any more, but I've always liked its design philosophy of "make easy things easy, and hard things possible." It seems like a slogan that can be adaptable to how to solve problems, though I'm not sure of the absolutely perfect analogue. Hmm, maybe I'm trying to solve the general instance of a problem ….)


I actually think this is the opposite of the case, for some definition of “generic”: the more generic problem has fewer possible solutions (there is only one pure function of type x=>x) so, if you hit on the right general problem to solve, your code will almost always be simpler. The problem is this is one of those “$1 to solve the problem/$99 to know which problem to solve” situations.
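(To make the "x=>x" point concrete, a toy TypeScript illustration, assuming no side effects or cheats like casting: the generic signature leaves essentially only one total implementation, the identity.)

    // The type parameter forbids inspecting or fabricating a T,
    // so the only thing this function can honestly do is hand x back.
    const id = <T>(x: T): T => x;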


> I actually think this is the opposite of the case, for some definition of “generic”: the more generic problem has fewer possible solutions (there is only one pure function of type x=>x) so, if you hit on the right general problem to solve, your code will almost always be simpler. The problem is this is one of those “$1 to solve the problem/$99 to know which problem to solve” situations.

This is true, but I think also illustrates the phenomenon. It's tempting not to solve the specific problem in front of you, because a more general problem might be easier and more elegant to solve—and this mindset can easily lead one into, at worst, never solving the real problem; or, at less worst, solving a problem that's so far generalized that no one else looking at your code can tell why it's doing what it's doing.

(It can also happen that the more general problem doesn't have a simpler solution. If I want to print a string that has a few hard-coded values in it, formatted a particular way, I could develop a formatting spec and write a formatting library to process it, which is surely the right solution to the general problem—but, if the specific problem is likely only going to arise once, then it may be both easier to understand and a better use of time just to put in the hard-coded values.)


generic is being used in two different ways.

1. linked list is a generic data structure with a relatively simple interface

2. an application with 1k configuration values is generic in that it can handle everything, but is in no way simple.


> "Oh, we're slowly building up a custom, buggy, binary framing protocol. Lets just use protobuf/msgpack"

By using protobuf/msgpack you lose the ability to precisely control the layout and encoding of your data on the wire. Most applications don't care, but this results in your wire representation being defined by "whatever protobuf says".

Say I want to transmit an unsigned 16 bit integer with protobuf. How do I do that? The documentation doesn't include 16 bit integers as a datatype, so I'd probably have to wrap it in 32 bits and/or use some varint stuff. It would be simpler to just write a big endian 16 bit int though.
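(For comparison, a minimal sketch in TypeScript of the hand-rolled version; the value is made up.)

    // Write an unsigned 16-bit integer, big-endian, by hand:
    // exactly two bytes on the wire, no varints, no widening to 32 bits.
    const value = 0xBEEF;                          // hypothetical u16
    const buf = new DataView(new ArrayBuffer(2));
    buf.setUint16(0, value, false);                // false = big-endian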

I wish there was a simpler alternative to protobuf that gives more control to end users and doesn't try to be smart. Until then, making your own binary protocol is not over-engineering.


It might be different if you have to talk to an ASIC that cannot understand protobuf, or send billions of values at line speed. But generally I don’t have to care anymore whether a number is sent in exactly sixteen bits, for the same reason I long since stopped caring about message framing or parity or run-length limits or 0-5 vs ±12 V busses. Expressing any of those constraints takes more effort than letting the machine use the commonly-supported default.

If I really wanted to squeeze out bloat, I’d try to use https://en.wikipedia.org/wiki/ASN.1#Example_encoded_in_PER_%... (which has a paid standard, but isn’t well known) before resorting to a completely ad hoc protocol.


It wouldn't be ad hoc per se; basically you would have a set of guidelines on how to transmit data, and that by itself would be a standard.

Something like "use fixed length 8/16/32/64 bit signed/unsigned integers in big endian, length prefix can be 8/16/32 bits, bool is 1 byte (00 = false, 01 = true)" etc, without extra stuff like varints or bit packing, which a lot of current formats are doing.

In short, just use the most straightforward way of encoding while also using the least amount of data. Big endian for ints is very common, simple and relatively compact if you only use the bit width that you need.
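(A minimal sketch of what encoding under those guidelines could look like, in TypeScript; the record layout of one u16 id, a length-prefixed string, and a bool is made up for illustration.)

    // Hypothetical record: id (u16), name (u16 length prefix + UTF-8 bytes), active (1 byte).
    function encode(id: number, name: string, active: boolean): Uint8Array {
        const nameBytes = new TextEncoder().encode(name);
        const out = new Uint8Array(2 + 2 + nameBytes.length + 1);
        const view = new DataView(out.buffer);
        view.setUint16(0, id, false);                // big-endian u16
        view.setUint16(2, nameBytes.length, false);  // big-endian u16 length prefix
        out.set(nameBytes, 4);                       // raw string bytes
        out[4 + nameBytes.length] = active ? 1 : 0;  // bool: 00 = false, 01 = true
        return out;
    }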


I agree; sometimes writing your own binary format is the right call. To my point upthread, the trick is knowing when that’s the right choice and when it’s better to use protobuf or something standard. (Or, when to just stick to json).

Developing good instincts for this stuff takes a lifetime.


> 1. I like interesting puzzles. A lot of code - especially commercial code - is pretty boring if you do it right. I find myself subconsciously pushing for features that will be fun to implement. And by "fun", I mean, features that will overcomplicate everything.

That's what factorio is for!


Or Project Euler! :)

https://projecteuler.net/


> A lot of code - especially commercial code - is pretty boring if you do it right.

That is a very good point. If your code doesn't look boring, then you're probably doing something wrong.


I'm reminded of that one quote from a letter of some author [paraphrasing, and I've seen it (mis-)attributed to Mark Twain and too many people to look the real quote up] "Apologies for the length, I did not have time to write a shorter letter".

EDIT: I ironically wrote way too long here. Grug say better:

> note, this good engineering advice but bad career advice: "yes" is magic word for more shiney rock and put in charge of large tribe of developer

People can write perfect, simple, DRY code if they have the time to and are incentivized to. In most cases you're rewarded for launching the thing and showing one's "technical prowess" with the amount of work / intellect / design skillz™ required to launch the thing. The natural conclusion of this is that everything becomes a bloated, over-engineered mess of kludge solutions that gets rewritten every 3-7 years.

I haven't seen any data on this but I'd guess the "rewrite half life" is correlated with turnover / average tenure, so even if people tried harder to write not-over-engineered code, it'd probably get rewritten anyway. As a perfectionist, this truly bothers me, but I find sometimes thinking harder / spending more time on the best _simple_ solution is rewarded less than building the complicated overengineered thing. I'm sure better organizations exist but I have yet to find one in 9 years as a SWE...

Open Source projects are actually the best counter-example to this that I can think of, but even then the best libraries sometimes get rewrites or new versions when they've changed hands from one maintainer to another, and I'll note that financial incentives are very different between open source and the types of enterprise-y cruft most people working full time as SWEs on HN probably see. It's like comparing a well crafted academic paper to a lazily written work email.


> People can write perfect, simple, DRY code if they have the time to and are incentivized to.

In my experience, people trying to make code DRY also wind up writing over-complicated patterns and abstractions to make it so.

I think a large amount of over-engineering is likely due to people applying patterns where they don't need to, or building unnecessary abstractions, or otherwise doing what they think is "good code".


Totally. Sometimes [Hanlon's Razor](https://en.wikipedia.org/wiki/Hanlon%27s_razor) applies too -- often times the easiest thing to do is pile on without refactoring anything. And what makes sense in the scope of one PR doesn't necessarily make sense holistically over several years of changes.

Most folks aren't incentivized or simply don't bother to think through things and try harder. The "this way looks more smart so I made it all complicated" thing definitely happens too, but in eng orgs like I'm part of where we go through design reviews to try and cull that sort of thinking, the other less-intentional version is still common enough to be a problem at scale.


>They are not thinking about every single line of code that they write

Honestly for me over-engineering is usually the result of the opposite. Thinking too much when writing and having preconceived ideas about what a codebase ought to look like.

It was Casey from Handmade Hero IIRC who called his style of programming "compression based", effectively just writing code and factoring out what belongs together incrementally. Abstracting things out as they repeat, not consciously by design. I've taken this up more and more as a way to program.


I've heard it described as WET. Write Everything Twice. As long as it's not a crazy amount of duplication or a really obvious refactor (especially if it leads to more readable code), writing something a second time will start to show a clear pattern and abstractions will naturally develop.

Some fellow devs seem to live creating big beastly complex abstract PatternFactoryClassBuilderGenerators for simple one off use cases which should be quite simple.

Having devs and PMs on board with adding estimations and spending the time actually doing that refactor on the second or third time you're following a pattern is the tricky bit. It pays dividends long term though as you maintain velocity.


Agreed. Over engineering happens when you overthink every line of code, not when you fail to think about every line.


> Now I can tell instantly if code is over-engineered. Unfortunately, it seems like maybe 99% of code is over-engineered

Regardless of whether the lessons you learned have merit, don't you see a problem with the mental model you've developed if its output gives the same answer 99% of the time?

In itself it doesn't mean it's necessarily wrong, but since you're assessing the quality of other people's work I'd assume you have a heavy bias in there.


There's no point in going crazy trying to make the perfect code. It just needs to be good enough for its purpose. Usually it is.

The problem isn't usually the code, it's the people in charge not giving the right instructions, shifting the goal post, allowing feature creep, or the deadly sins of rewrites, large refactors, unrealistic deadlines.

Over-engineering is just fine in the real world. The problem is when that causes cost and deadline overruns. Those can be controlled for, even if the complexity can't. But finding a manager who knows how to manage software teams effectively and simultaneously keep his bosses from digging their own graves is even rarer than an engineer who writes simple code.


I wouldn't call those "smart people". They're not much beyond mediocre, but see overcomplicating things (which in many cases they will dogmatically explain away as being a "best practice") as a way to make it appear like they're smart.

The smartest are those who can make complex problems look simple, with simple solutions.


And they're never really appreciated, since those problems end up looking so simple in hindsight


Disagree. When you look at a "less is more" engineer next to a "I need to solve the general case with the perfect API and refactor the foobar" after 6-12 months you'll notice that the former has a clear pattern of delivering, and the latter... usually doesn't.


> Few are ever able to escape the mindset because they are not fully conscious.

No one can, and no one is. All lines of code are over-engineered, just some more so. No amount of thought will get you to the perfect solution; it's an unapproachable asymptote. We're all creatures of time and there's a limit to how many cycles we can afford to spend iterating in design space to try and hit the right note. (Not to say we can't get better with practice, though.)


> It's weird how smart people are naturally attracted to complexity like a moth to a flame.

Hah, I just experienced this when I asked my brother for a simple comparison of about 7 criteria in a 3 column table, to include in a report.

He gave me 12 criteria not entirely from the data source I was looking for, and a cost calculator to boot.

On the other hand, over the course of a few weekends he coded up a beautiful baroque bastard of an excel macro that saved us thousands of man-hours over 5 years.


Overengineering is part ego, sure, part new technologies and boredom, sure.

The biggest factor I've found in overengineering is the lack of a long term roadmap. If you need to build a feature X, and you engineer the bare minimum and need to put in the same amount of hours to do X+1, your management is going to be upset that you're taking too long to ship. You already had it 80% (in terms of feature complete) of the way there, why is that extra 20% as hard as the first 80%? So the engineers build up scar tissue. If you have to handle a ton of cases that you don't understand, why not build a CaseHandlerFactory so you can add the next feature faster?

A clear roadmap of "this is what we want in 1 year" will help solve over-engineering. Otherwise engineers are incentivized to make their code as configurable, modifiable, and extendable as possible, regardless of cost or business need. Not to mention all the additional time trying to figure out "canonical" data models that will "future-proof" the application's interfaces. If you have to iterate quickly (which is not as common as agile folks wish), you need to build up a rapport with leadership to help them understand that speed comes with trade offs: faster might mean 'more work to change later', while slower to market today might mean faster iteration down the line. These discussions ARE valuable for leadership, as sometimes they need a quick win because of a Q3 earnings hole or contract, and sometimes they are willing to make longer term investments.

You move too quickly, they call your solutions hacky, you move too slowly, they call it overengineered. Everyone needs to be on the same page of what the change is trying to do: win short term, win long term, or somewhere in the middle.


> Otherwise engineers are incentivized to make their code as configurable, modifiable, and extendable as possible, regardless of cost or business need. Not to mention all the additional time trying to figure out "canonical" data models that will "future-proof" the application's interfaces.

I've never understood this, maybe I'm just a bad engineer. I mean, it sucks having to pivot, but all those configurable and extendable pieces take hours trying to get the design right. And you only end up using ~1% of them, and then something you didn't foresee happens as well. I always end up spending months of my life saving days doing the change-overs.

And that's before we get into the problems with onboarding a new engineer (or being the one onboarded) into the kind of hellscape that the overly configurable application turns into.


> The biggest factor I've found in overengineering is the lack of a long term roadmap.

Can subscribe to this; 16-year career. Even a horizon of 3 months seems unattainable.

I call it a "management problem", but maybe that's because I'm not in management..


> The developer's incentive to maximize their own lock-in factor and billable hours are powerful forces. Even developers who appear to be totally robotic and ego-less are often guilty of over-engineering. It works on the subconscious mind. Few are ever able to escape the mindset because they are not fully conscious.

I suspect this phenomenon explains a lot about how this industry has developed over the last few decades. Any significant software nowadays requires a team of baby sitters just to keep operating. Nothing is ever done and everything keeps changing for no good reason.


I'm not really a fan of the term "overengineered." I often find it is a direct translation of "Something I don't understand quickly."

In my experience (including my own development), overly complex designs accrete, as opposed to start off complex.

They usually seem to begin with "This is simple, let me just do this...", then, when we run into Roadblock A, we design in a mitigation, and so on...

Eventually, we have a ghastly chimera.

Other times, it comes from trying to coerce software written for one purpose, into another, and the glue code is kinda messy.

Also, there was an article mentioned here, about "Don't design a general-purpose framework."

I can concur with that. The app I'm releasing now, has a server component that is, in my opinion, way too complex. I designed it, and implemented it, so I get to say that.

The deal was that I originally developed it as a general-purpose framework. It has a layered architecture, and I did heavy-duty unit testing of each layer, as I was writing it.

It works very well, is fast, and secure.

But way too complex, as this app is its only implementation. It handles a lot of stuff this app never touches, like trees of user permissions. I have a very simple, rather "flat" permission structure, so a whole shitton of code never gets used. It was tested heavily, works well, but will never be used. I don't like having unused code paths, but I don't have the luxury of time to remove it (I have removed some, but there's plenty more, where that came from).

If I were to rewrite it (I won't -see "works very well", above), it would be much simpler.


If you claim that 99% of code is over-engineered, you better provide a good definition of over-engineering and best practices for not over-engineering. Because with a claim like this, I assume your model or definition of over-engineering is probably wrong.


I don't think it's lock-in or billable hours. I think "staying current and employable" is a much bigger influence on behaviour than either of those.

Also, I think most developers just don't want to do the same thing twice. And most developers really are writing the same software over and over again through their careers, with minor changes. So they need to change something to keep it interesting, and the only things they can easily change are technology and methodology.


> smart people are naturally attracted to complexity like a moth to a flame

I think the general inclination here towards static typing is due to this, rather than any evidence that statically typed languages lead to higher quality software. Engineers just love puzzles. I'm also looking at you, Rustaceans...

runs for his life


I’ve seen some variation of this accusation being thrown around for years- and frequently by people who I regard as smart, capable developers. On the other hand, having worked with languages all over the spectrum of static typing- I’ve also seen firsthand how high the bar really is for benefiting from static types before you hit diminishing returns.

The best answer I can come up with is that people just seem to have differently wired brains. For me, static typing- even fairly sophisticated static typing, is simple. It makes the code simpler, easier to reason about, easier to refactor, and with a sufficiently expressive type system it lets you build things in a much more intuitive way than you could otherwise. It’s not about solving puzzles for the sake of them- types remove a big part of the puzzle by letting me explicitly write things down- and letting the compiler keep track of the details.

Certainly plenty of people don’t see it that way, and I’ve heard a lot of people make similar arguments about dynamic typing being simpler and more expressive. I don’t think they are lying, but I see a big pile of inscrutable pain when I work in large dynamically typed codebases.

I know I’m right about my experience, and I trust other people are right about theirs, so there must be some significant divide in how we conceptualize code that makes one persons elegant simplicity another’s intolerable complexity.


Nothing to add, except for a sincere appreciation for how you eloquently and impartially summed up the divide.


I don't agree with this - types do not imply abstractions, and simple types make simple code.

Every tool has the potential to be misused in accidental complexity.


I don't know, man — I see a lot of TypeScript developers who lean hard into generics (I am occasionally one of them).


When I've seen this (and found myself doing it) it's been because we're trying to do something with TS which we would have done easily in JS.

But the JS function we would have written would have required someone using it to read and understand it, and the TS function (without using 'any') needs to fully express what its inputs and outputs can look like.

Because of this TS actually tends to guide me towards writing more "Grug brained" code, because I refuse to use 'any' (and throw away TS benefits) and using generics usually requires a trip downstairs for a fresh cup of tea.


Because generics are actually a powerful tool for simplifying the data flow. They make it possible to promise not to do anything specific to and based on the data involved.
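(A small TypeScript sketch of that promise; the function names are made up. The generic version can only move values around, it cannot inspect or depend on them, while the concrete version is free to reach into the data.)

    // Generic: promises to do nothing specific with the elements themselves.
    function firstOr<T>(items: T[], fallback: T): T {
        return items.length > 0 ? items[0] : fallback;
    }

    // Concrete: free to poke at the data, so callers must know far more about it.
    function firstUserName(users: { name: string }[]): string {
        return users.length > 0 ? users[0].name.trim() : "anonymous";
    }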


This is a very simplistic approach to simplicity. Simplicity is not just counting the number of characters you see. A function in a statically typed language may have a signature that says 'it takes an integer and returns an integer'. That is very simple. A function in a dynamically typed language says 'this can take something, anything really, and returns something, anything, really'. That is very complicated and unspecified, even if it takes a few characters less to type.


I agree, but for complex type systems versus Java-style type systems.


Electronics can absorb superhuman levels of code complexity, without protest or guardrails.

At least STEM subjects are tested against repeatable natural phenomena or mathematic validation.

A CPU is indifferent to you complexifying yourself, ad infinitum. It didn't have to be like that.


I think sometimes I’m just trying to be lazy, ironically enough. Rather than just write a bunch of these classes, I’ll just make an abstract one… grug create many new problem to help solve problem


I sometimes over-engineer / over provision my own pet projects as a way of learning / exercising some technology that's new to me.


I’m Danish, so my opinion on this will be coloured by the fact that most developers here have a form of formal CS education. But we teach people to overthink and abstract things. So I think that it’s only natural that people are actually going to do exactly what they’ve been taught.

I have a side-gig as an external examiner for CS students, and, well, a lot of the stuff we grade our students on are things that I’ve had to “unlearn” myself throughout my career, because usually complexity and abstraction aren’t going to work out over a long period of time for an IT system. This obviously isn’t something that’s universally true. I do tend to work in non-tech enterprise organisations (or startups transitioning into enterprise), and in this space a lot of what is considered good computer science just doesn’t work. It’s everything from Agile, where your project processes will trip you up as you try to juggle developers who both need to build and maintain things at the same time, to how we try to handle complexity with abstractions, and how those abstractions sometimes lead to “generic” functions where you need to tell a function 9001 different things before it knows how to perform “because it just evolved”.

It’s in everything really. Like, we teach students to decouple their architecture, and it’s absolutely a good thing to teach CS students, but the result is that a lot of them “overengineer” their architecture so that you can easily swap which database your system is using (and similar) in a world where I’ve never actually seen anyone do that. Anecdotal, sure, but I did work in the public sector where we bought more than 300 different systems from basically every professional supplier in our country, and you’re frankly far more likely to simply replace the entire system than just parts of it.

But how are you going to know that when all you’ve been taught is the “academic” approach to computer science by teachers who might have never experienced the real world? Being self-taught isn’t really going to help you either. I can’t imagine how you would even begin to learn good practices in the ocean of “how to” tutorials and books which are essentially little more than the official documentation on a language and maybe a couple of frameworks.

> The developer's incentive to maximize their own lock-in factor and billable hours are powerful forces

This part, however, I disagree with. Again this is very likely coloured by the fact that I’ve mainly worked in the space where developers both build and maintain multiple things, often at the same time. But I’ve never met developers who wanted to do this. In fact I only meet developers who genuinely want their code to be as easily maintainable by others as possible because we all absolutely hate breaking off from actual development to fix a problem. That being said, I do think there is a natural risk of ending there accidentally if you haven’t “unlearned” a lot of the academic CS practices you’ve been taught. Especially because there is a very good chance you didn’t really “learn them right”.


I fight complexity by trying to be lazy. The lazy solution is often the simplest, so go for lazy.


I love this site, always get a laugh out of it. My absolute favorite:

Microservices

grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too

seem very confusing to grug


I love this site and read it regularly. Everyone I know must be tired of the "software development manifesto" that is grug brain.

>complexity very, very bad

>given choice between complexity or one on one against t-rex, grug take t-rex: at least grug see t-rex


I have a beef with the typing section:

> grug very like type systems make programming easier. for grug, type systems most value when grug hit dot on keyboard and list of things grug can do pop up magic. this 90% of value of type system or more to grug

Juniors at my job routinely ship code that breaks due to null access in production, Sentry tells me. During intensive development periods that's about 1 detected null-access bug per day per junior developer.

Using a proper type system with static checks would probably help immensely by pointing out "Hey, this can be null. You sure?" in their IDEs...
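(For example, a minimal TypeScript sketch with strictNullChecks enabled; the names are hypothetical.)

    interface User { email?: string }                   // email may be missing

    declare function sendMail(address: string): void;   // hypothetical helper

    function notify(user: User) {
        // sendMail(user.email.toLowerCase());          // compile error: possibly 'undefined'
        if (user.email) {                               // checker forces handling the null case
            sendMail(user.email.toLowerCase());
        }
    }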

Also, you can have completion even without static typing.

> big brain type system shaman often say type correctness main point type system, but grug note some big brain type system shaman not often ship code. grug suppose code never shipped is correct, in some sense, but not really what grug mean when say correct

That's just rude and uncalled for.

I have shipped code mainly in C, PHP, Python, Haskell and typed Python. The incidence of bugs that make it into production is much lower with typed languages. That's one reason to like it.

It also makes refactoring much, much easier. I can check whole code base for broken callers when I change something widely used and get reliable results in seconds. That helps immensely with iterating on a growing code base.


> That's just rude and uncalled for.

But is it wrong? Based on your stated experience, not even you seem to buy into type systems that force formal proofs – instead accepting lesser type systems that make tradeoffs between catching some problems (along with, most importantly, providing popup magic!) and not bogging you down in every little detail needed to prove total correctness.


> But is it wrong?

Yes, because people don't "not often ship code in statically typed languages". It's obviously wrong.


Is it? When was the last time you saw, say, Coq in the wild? If you ever saw it you can be certain the program is correct, but I certainly have never seen it. Most likely because anyone using it really is still trying to satisfy the type checker.

Clearly type systems live on a spectrum with varying degrees of sacrifices made, with each sacrifice compromising some ability to check for correctness, but giving back some other advantage, improved developer productivity being one possibility.

It is not a question of static typing or not – it is how complicated of a type system do you really need? Even Grug agrees that dynamic typing is not sufficiently complicated for most circumstances, but maybe a primitive static type system somewhere in the middle of the spectrum that helps catch the most grievous of errors (and provides popup magic!) with a sprinkling of automated testing to fill in the gaps where the type system falls short is good enough?


There is a difference between Coq and say Haskell, for example. I guess I will disprove your entire point by finding an unknown dynamic language that nobody uses then.


I think you're taking offense on behalf of Java when the paragraph is dragging Haskell and Rust.


I think that it might be misguided. Excessively prioritizing how quickly you ship code is probably the single quickest way to summon the complexity demon.


Or maybe naïve would be more accurate? I could see how strict typing might not be needed if the developers write simple code on their own. However, in practice strict typing is necessary to protect yourself from developers who are unwilling to put in the effort to write simple code.


> That's just rude and uncalled for.

It's just a joke. I'm almost certainly one of the people he's poking fun at and I don't think we should get bent out of shape. He obviously thinks that C or Java levels of static typing are fine/good, and thinks that fancier stuff like found in Rust and Scala is a waste of time. I think he's dead wrong, but that's life.


I'm confused. The original article is in favour of typed languages. Aren't you in agreement?


I think they differ about the why. The article claims that types are useful mainly because they allow sophisticated auto-completions. u/mordae thinks there are more important benefits re avoiding bugs in production.


In the link, Grug acknowledges the benefits of correctness established by types to an extent, but is highlighting the diminishing returns. After all, if one truly valued type system-enforced correctness, they would be writing code in something like Coq, not C and typed Python.


That sounds like a false binary to me. All language features have trade-offs and interactions with other features. It's not about "truly valuing" one thing above all else - your relative preferences depend on your experiences and what you value in the product.

Some people think type correctness is very important; perhaps they value minimising bugs over shipping fast. Other people think auto-complete suggestions is more important; perhaps they value shipping fast over minimising bugs.

Both positions could be well-justified depending on the domain. In healthcare or military, the former makes sense. In gaming or web development, the latter makes sense.


> It's not about "truly valuing" one thing above all else

There is context here. With respect to the link, it is. It specifically refers to the "shamans" who promote that there is only one true type system way, and any "lesser" type systems are not sufficient.

C, typed Python, hell, basically every language you are actually going to encounter in the typical programming project makes the very tradeoffs you mention, because it's probably not that important for the type system to perfectly express the constraints in order to validate correctness for your problems. That a primitive type system with some automated testing is good enough in the vast majority of cases. The idea, if you hadn't already figured it out before getting that far into the document, is to not overcomplicate things.


The linked article says:

> big brain type system shaman often say type correctness main point type system

This is a much weaker position than you're suggesting. Opining about the "main point" of type systems does not mean thinking that there is one true type system way.


> That's just rude and uncalled for

I think you're not the "shaman" he's referring to.

By "big brain type system shaman" I understood someone who doesn't necessarily write code, but instead, e.g. sells courses around the topic, getting people hyped about it, etc.

He uses the word "shaman" previously when talking about Agile, and there are many who fit that picture of working on selling courses, and not on developing software.


I've worked with a "big brain type system shaman" before. If anything, this is a very kind and charitable analysis. People who are obsessed with type hierarchies are _insufferable_ to work with, and their justifications often live in the land of "well what if"'s and "but that's not _sound_"'s. A lot of the enshittification that's induced by type systems comes from folks who believe that simply with enough types they will be able to completely prevent bugs, and that's just not true.


OK I reckon everyone is going to get on the HTMX wagon over the course of the next few months, and it's going to blow a ton of young minds and save a huge amount of global energy and make a lot of people very happy. And then these same inquisitive young people are going to click enough links on htmx.org that they stumble across hyperscript and it's gonna be like that moment in Dusk till Dawn where the vampires come out


> everyone is going to get on the HTMX wagon

This is an infinite cycle with JS though. Someone tired of JS complexity writes a simple JS lib (SJSLib), SJSLib attracts people for simplicity, SJSLib grows complex because it has to support all the web things, someone tired of SJSLib complexity writes a simple JS lib...

I say this as someone who started with raw JS and fought IE5/6 for years, jQuery then saved us all, skipped over AngularJS because it was meh, React beta finally came around and loved it for FP ideals, now wade thru piles of transpiling / hot-reload / Typescript and React fat-libs. Have also written a few projects with both Intercooler (and now HTMX).

HTMX is overall solid, but this ain't the first wagon to come through town.


Obligatory Vanilla JS link: http://vanilla-js.com


Plain JS and HTMX are like Duct Tape and Spackle. I feel like I can fix/build anything I need with them.


HTMX is pretty much feature complete though.


To this comment’s replies: The dev of HTMX has said and is well aware that HTMX/hypermedia-based approaches don’t work well for every type of web app, but claims that it fits well with the vast majority of them, so the chance of it becoming bloated for want of fitting every single use case is pretty low. Grug says that a developer’s most valuable weapon against the complexity demon is limiting features by saying “no”, so I think the chance is pretty minimal.

So the cycle spoken of would be broken at this sentence. “SJSLib grows complex because it has to support all the web things.”


Famous last words.


Wars have been won and kingdoms lost in the space between “pretty much” and “complete”


Ah! Carson Gross is the author of the grug brained developer, and of htmx!

I'd love to see some real critiques of htmx.

Personally I think the web's big ongoing challenge has been figuring out how to update the page well, and we keep trying all kinds of attempts. That causes fatigue, and some of the ideas are wild or grow crufty over time. Htmx seems like a back-reaction, to insist on grug brained less/YAGNI.

And even though the future is uncertain, I still see work on things like signals as saintly. There's a lot of different pokers in the fire, refining & trying different angles. We're still pioneering this front, other fronts.

The quest feels similar to so many others. MobX and Svelte and others have made great journeys already. That the journey is still irresolute, that we do keep tangling with the complexity beast: rather than that being a sign of weakness, as the conservative/grug minded might take it for, I think and hope it's appropriate & due effort to reach a simple-but-good solution, rather than a just simply simple approach.


Some critiques focus on the developer insecurities that make them vulnerable to HTMX's charms:

- Writing Vanilla JS ES6+ is not that hard, actually it is pretty fun.

- Toolchain for JS/TS/CSS was bad but is pretty good now. esbuild is simple and great.

- Dodging learning the above two things does not serve the dev.

Others have to do with foundational aspects of the project:

- HTMX is in denial of need to support native clients. IIRC, the official recommendation is that native clients should parse HTML for values.

- HTMX is still just a javascript library dependency with syntax that must be learned by whoever picks up the project.

IMO HTMX's buzz can largely be attributed to anti-frontend or anti modern frontend.

This thread describes this idea: https://twitter.com/DanaWoodman/status/1682075711266496512


Why would I write my own JS when "hx-get" does the job? Writing your own JS makes sense in some cases, but using HTML partials is also useful in many cases where you need to talk to the server anyways (since they e.g. prevent logic duplication). HTMX just makes it easy to work with them, as they're a single attribute away, rather than doing the request and insertion by hand.

Native clients are deliberately out of scope, but scraping HTML isn't the suggested solution. Rather, it is having a JSON API next to the hypermedia API: https://htmx.org/essays/splitting-your-apis/. This does have the downside of duplicate work, but does bring stability to the API without introducing something like GraphQL.

And yes, you will need to learn HTMX to use HTMX. But the surface of HTMX is generally quite small, and composes well with knowledge about HTML/CSS.


> Writing Vanilla JS ES6+ is not that hard, actually it is pretty fun.

i like vanilla es6 too. i've enjoyed using it with htmx, rather than using intercooler or alpine.


The best critique of HTMX is the same critique of Rails + partials + a bit of helper JS for inserts (the popular stack that predates modern SPAs.)

Say you’re writing a true web app. You’re making a mobile view, and the user has scrolled down and loaded some items (and partially inserted them as the scroll bar has moved down.) They add a card representing a few items to a cart, which 1. changes the look of the card and 2. the number that floats above the cart icon in the header.

Do you

1. Return the updated card and insert it as normal, then pass the HTML to figure out the number to change the cart icon with and call JS to change that icon?

2. Trigger a full page reload, meaning that both items update at the cost of losing the scroll position?

3. Break convention with HTMX and call a JSON API that will let you return the updated primitive values, and maintain the display logic changes in JS?

4. Have a weird hybrid JSON + HTML API that return multiple responses for each part of the DOM tree that needs changing and rely on some custom JS to do both updates?


https://htmx.org/examples/update-other-content/

If you need to modify the card and something else, probably using hx-swap-oob is the best approach to update the cart icon.


> And then these same inquisitive young people are going to click enough links on htmx.org that they stumble across hyperscript and it's gonna be like that moment in Dusk till Dawn where the vampires come out

I laughed at this, then I checked.

    _="on load wait 5s then transition opacity to 0 then remove me"
Oh no!

    _="on htmx:error(errorInfo) fetch /errors {method:'POST', body:{errorInfo:errorInfo} as JSON} "
Oh No!


it gets so much worse, trust me, i created it


Spoiler very much alert.


The Dimension Collector's Series DVD cover has the fanged face of a vampire woman on it.

If someone thinks it's ruined because they found out ahead of time that there are vampires in it... yeah, that's their problem.


If it's in the trailer it's fair game. Besides an actual spoilery description for From Dusk til Dawn would be all the non-vampire parts. The "twist" is that a movie billed as Robert Rodriguez vampire schlock actually has an entire other movie contained in it. That's a bit of a spoiler I suppose, my bad.


lmao so true


I've been a developer for 30 years, and I'll admit that in my early days, I was arrogant and thought I was smarter than everyone else. I'd describe myself back then as one of those "big-brains" loving all the complexity demons.

10 years later, and I've shifted more towards being a "Grug brain" developer. Now, I focus on the simplest solution that could possibly work, knowing that it's probably not perfect. But that's okay, because it gets me closer to what is correct, allowing for iteration.

The best thing you can do as a developer is to delete code! Right now, we have a requirement that we've been living with for two years that suddenly isn't a requirement anymore. I can't tell you how excited I am to go through and rip out a whole bunch of code, because it makes everything simpler.


There's some indescribable and pleasurable sensation in the back of my head when I delete code.

It's like I'm FEELING the space being freed.


Grug came up a week ago in a Philosophy of Software Design submission's comments. I thought the commentary was pretty good. https://news.ycombinator.com/item?id=38011938

> I feel the exact same about grug. I don't think people actually agree on what's simple, so it's pretentious to pretend your "simple" is the obvious one that a caveman would agree with.

Simple/simplicity is often one of the most complex things to discover. I see a lot of people resolute & certain that everything around them (except what they do) is complex & needs to be bent into simplicity. It feels dangerously weak & clutching after authority.


That’s a very good point. I feel, however, that the main idea is that bad philosophy is what ultimately fuels complexity.

The grug brain philosophy is simplicity at all costs, unless absolutely unavoidable.

The big brain philosophy as grug sees it is reusability at all costs, unless absolutely unavoidable.

The issue with this philosophy is that it tends to lock in first generation design choices and makes iteration more difficult.

It’s true that simplicity is difficult to find, but iteration is the key to finding simplicity.


> The issue with this philosophy is that it tends to lock in first generation design choices…

I agree.

A common approach to simple is “just start with the first thing that pops into your head and see how far you get.” I guess we could describe that as “simple to think of.”

A far less common approach to simple is “think through the whole problem, then remove everything that you don’t need.” This is what I’d call “a simple solution”, but note that it takes a lot more work to find it.


You've made one strawman accusation, and sure, people polarized that specific way (insisting only things worth doing are worth making reusable) are often doing bad things.

But I see plenty of anti-intellectual, anti-whatever attitudes founded in other forms of disdain. And I think grug reflects a broad/broader spectrum of negative biases.

One example, I've had an incredibly hard time getting folks to switch from dirty cobbled-together-with-StackOverflow shell scripts to Ansible, which is just a more sight-readable, consistent experience. Or to zx, or to anything other than badly written shell scripts no one collaborates on.

People swear worse is better, are YAGNI (you ain't gonna need it) up the wazoo against 90% of everything. I think grug picks a lot of things to bash on, and a lot of devs do too.

I adore simplicity, but I find it challenging & long & arduous to hammer out of things. It's not fast to produce, or low thought. It's not made by resolutely sticking to lo-fi paths & mentalities.

We do need to be aware of the hazards at the other side, of ridiculous complexity & absurd/unnecessary systems engineering: yes! And I think grug offers some good almost-koans to reflect on, illuminates real hazards well. But I also think it's actively harmful & pernicious to put grug-brainedness on a pedestal, to go about actively disbelieving in possibilities on the grounds that surely they could be done simpler.

We seem to agree that simplicity takes iteration. Grug presents it as "oh I'm just dumb grug brain, I dunno" but in practice there is a lot of infighting in tech, and those who insist too loudly on aiming too high are not even, IMHO, as dangerous as those insisting too loudly on aiming too low. We have to keep engaging possibility from all sides, & keep finding out what balances do work.


> One example, I've had an incredibly hard time getting folks to switch from dirty cobbled-together-with-StackOverflow shell scripts to Ansible, which is just a more sight-readable, consistent experience.

Ansible can be used quite elegantly/simply, but the ecosystem as a whole is totally infested with "big brains" who insist on complicating things to make them more Reusable™.

There are really guys out there who will argue that your 20 line shell script actually needs to be two Ansible Roles each consisting of 10 files spread out among 2 layers of directories, and can be published to the Galaxy.

(In reality you can turn that 20 line shell script into a 30 line idempotent, single-file Ansible playbook, which is SOMETIMES worth it)


What I like so much about the Grug piece is that it’s full of nuance and humility. It’s not pretentious.


Grug express simple and clear.

Grug no make small problem big.

Grug humble.


Measure LOC, in my opinion. The local minimum will move around based on the complexity of what you're building, so there's room for many reasonably intelligent opinions to be right. It's an imperfect measure but directionally correct, and works great on a log scale.


I've seen a lot of people hating on LOC as a measure of complexity, but I think there is a signal in the noise.

I think it does a pretty good job of identifying if you're fighting your tech/language or doing things the "intended" idiomatic way. The difference is usually at least an order of magnitude in LOC.
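
As a small (hedged) illustration of the kind of gap I mean, counting the most common words in Python, first fighting the language and then leaning on the standard library:

    from collections import Counter

    text = "the quick brown fox jumps over the lazy dog the fox"

    # fighting the language: hand-rolled counting and sorting
    counts = {}
    for word in text.split():
        if word not in counts:
            counts[word] = 0
        counts[word] += 1
    top = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:3]

    # idiomatic: the library already does this
    top_idiomatic = Counter(text.split()).most_common(3)

    print(top)            # [('the', 3), ('fox', 2), ('quick', 1)]
    print(top_idiomatic)  # same result, one line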


I've operated by and described the concept of Chesterton's Fence countless times, so that's a great name to learn. It's such a regular thing working with new grads, etc.: they see some "old legacy crap" and their first reaction is to want to tear it out or scrap the whole thing and start over.

On some occasions it can be worth the lesson to let them try, but it's good to remember that the people who came before us weren't all complete idiots, and there is generally a reason why they did what they did.

Sometimes it is gross old code that needs to be replaced, but even then it often still contains a hard fought record of all of the corners and edge cases you need to understand and handle to build anything in that domain.


I love the section on tests. It really is exactly what I've come to learn over the years. Integration tests are the sweet spot for finding bugs. Mocks tend to overcomplicate things (I still use them sometimes, but I avoid using them systematically), and unit tests are too brittle in the face of refactoring, whereas integration tests help with refactoring.


Can't agree more. My favorite way to cheat with them is to have integration tests that follow demo scenarios, so you can run them right before the demo (preferably twice).


Strongly disagree. Integration tests work brilliantly until a certain size or complexity is hit and then they become really bad. Unit tests are harder to write and maintain, but they will serve you much better in the long run because when they fail it’s much easier to understand and debug.

The worst sort of tests are integration tests which secretly depend on another integration test having run first, which will be true 99% of the time, until a change you make changes the order.


> The worst sort of tests are integration tests which secretly depend on another integration test having run first

That's an example of bad integration tests. Well engineered integrations tests don't do that.


It's a property of the code under test, not the tests themselves.

If the system is crappy/stateful/implicit, and you somehow manage to write nice/clean/stateless integration tests against it, I'd argue that the tests won't be close enough to the expected running of the system to tell you anything useful about it.


Nice and clean in the context of an integration test doesn't mean no state; it just means no state outside the context of that test.

If I set up an integration test that sets up a database from scratch and tears it down and tests only the behavior of the app in that rigidly defined context, then yes, it will be useful. It will tell you how the code behaves in the scenario you've created.

Bad integration tests will share state with each other - e.g. by using the staging DB.
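
For what it's worth, a minimal sketch of that kind of per-test isolation, assuming pytest and SQLite (the schema and add_user helper are hypothetical stand-ins for the real app):

    import sqlite3
    import pytest

    def add_user(conn, name):
        # hypothetical application code under test
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        conn.commit()

    @pytest.fixture
    def db():
        # fresh database per test: state exists, but only inside this test
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
        yield conn
        conn.close()  # teardown; nothing leaks into the next test

    def test_add_user_persists(db):
        add_user(db, "alice")
        assert db.execute("SELECT name FROM users").fetchall() == [("alice",)]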


That's precisely what I meant by my comment. The 'cleaner' the integration test, the less it will behave like the real-world system.

> Bad integration tests will share state with each other

The real-world system shares state.

> If I set up an integration test that sets up a database from scratch and tears it down and tests only the behavior of the app in that rigidly defined context

... constructing a particular set of circumstances which will never occur in the real-world system.


>... constructing a particular set of circumstances which will never occur in the real-world system.

What do you find unlikely about a scenario where a test uses an app in a realistic way (e.g. with a browser) set up in a realistic context (e.g. with some fixed sample data) to reproduce a realistic scenario (e.g. a bug that already happened)?

I wouldn't say that isolation and realism are completely orthogonal but I find that well engineered integration tests are usually able to reproduce 90% of bugs sourced from production while unit tests can often manage only 10 or 15%. Bug in the SQL? Browser is involved? No can do.


> I wouldn't say that isolation and realism are completely orthogonal

Neither would I. I'm arguing that when you write a test method, you deliberately make the choice to include some kind of 'before-all' method, or not.

The reasons you would choose to include a 'before-all' method will vary from case to case. Let's say you're testing an addUser method. If you choose to isolate its state to avoid 'test flakiness', it is you making the call that addUser is flaky when run against shared state.

What is it about your application code that would make you think that addUser is flaky enough to need a clean slate to run against? Why not change the application code instead?


>The reasons you would choose to include a 'before-all' method will vary from case to case.

Not really. I would always purge anything that would cause tests to share state. I wouldn't do it on a case by case basis.

>What is it about your application code that would make you think that addUser is flaky enough to need a clean slate to run against?

The user already existing in the database? The behavior of the app would change in that case. Something has to wipe the db clean to test that scenario.

That's why tests shouldn't share databases.
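
A tiny self-contained sketch of that scenario (hypothetical schema, Python/pytest): the test itself creates the "user already exists" state against a fresh in-memory database, so nothing is inherited from a shared one.

    import sqlite3
    import pytest

    def test_add_user_rejects_duplicate():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
        conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))  # known starting state
        with pytest.raises(sqlite3.IntegrityError):
            # behavior changes because the user already exists
            conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))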


Unit tests have lower up-front costs but higher ongoing costs. By their very nature they couple more to implementation details, so when they fail it's often unclear whether behavior is actually broken.

Integration tests can give unclear signals when they are flaky, but when they are engineered well they will give a much clearer signal that things work when they pass and that something is broken when they fail.

It's harder to engineer a good integration test - this includes making them isolated and independent e.g. of test ordering or indeed, anything else.
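
A small hedged illustration of that coupling difference (hypothetical names, Python with unittest.mock): the first test breaks whenever the internals are renamed or restructured, even if behavior is unchanged; the second only breaks when the observable result does.

    import sys
    from unittest.mock import patch

    def _normalize(s):            # internal implementation detail
        return s.strip().lower()

    def greet(name):              # observable behavior
        return f"hello, {_normalize(name)}"

    def test_greet_calls_normalize():
        # coupled to implementation: asserts *how* greet works
        with patch.object(sys.modules[__name__], "_normalize", return_value="bob") as m:
            greet("  Bob ")
            m.assert_called_once_with("  Bob ")

    def test_greet_output():
        # coupled to behavior: asserts *what* greet does
        assert greet("  Bob ") == "hello, bob"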


For what it's worth, the conclusion that I've come to with respect to tests is:

If a team has simple code, then tests can help a lot. However, if a team does not have simple code (and usually they don't) then it's better to spend time simplifying the code than writing tests.


I think a lot depends on what you think a "unit" is. If you use a leaf function as your unit, I think that's often much too low-level, but larger modules with relatively stable interfaces can make good units that are productive to test.


> Complexity very very bad.

> best weapon against complexity spirit demon is magic word: "no"

> sad but true: learn "yes" then learn blame other grugs when fail, ideal career advice

Complex wisdom from grug.


Discussed at the time:

The Grug Brained Developer - https://news.ycombinator.com/item?id=31840331 - June 2022 (374 comments)


Complexity still very bad.


Everyone's blub paradoxed about simple.

Anything below your chosen level of simplicity has no features.

Anything above is too complex.

You are at the true simplicity optimum. Your manager is the one who doesn't get it. Terrible guy. Understands nothing. Unlike you, true artist, pure simplicity.


The only reasonable comment here


Grug Inc. -- Fantastic! I really enjoyed reading this and feel like I'm guilty of unleashing the complexity spirit demon, even though I'm not even a big brain. Out of curiosity, are there any programming languages that naturally steer people away from complexity but still "get the job done"?


Of the languages I’ve used, Go comes closest. The language is small and boring, but it means I stay focused on the problem at hand instead of getting fancy.


There are two ways that a system can get unworkably complex. The most obvious is to overengineer and introduce too many excessively complex abstractions. However, it is equally harmful to become dogmatically obsessed with using the most "straightforward" implementation when more sophisticated approaches would make things easier to understand.

I have not used Go, but what I have heard makes it seem like it is designed in a way that would encourage the second approach.


That's the first thing anyone has ever said about Go that makes me want to learn the language. That sounds like an ideal programming language.


This mindset permeates through the entire ecosystem and tooling as well.

Enterprise projects build in single digit seconds, test suites fly by, project builds to a single binary (with embedded resources), most third party dependencies follow the established interfaces (which means plug and play) and so on.

It's the one language that I feel confident in, despite having worked in several other languages for many more years than Go.


I second this but I will say that the complexity demon lives in the reflect package.


It's a core tenet of Go

https://youtu.be/k9Zbuuo51go


This piece reflects some of the most frustrating professional interactions I’ve ever had where people insist that something is too complicated with no concrete suggestions for simplification.


There are two reasons that can happen.

1. It’s not actually overcomplicated, but the people saying it is haven’t thought about it hard enough to realize this.

2. It is overcomplicated, but it’s such a tangle of complexity that fixing it would require the people pointing out the problems to basically do it over from scratch.

#2 is usually the result of a very experienced developer being overwhelmed by the amount of complexity coming out of the vastly larger number of inexperienced developers around them. It’s much easier to add complexity than it is to fight it.


There’s also a third reason: a form of anti-intellectualism where you think that designs that are hard to derive are intrinsically more complex than just doing the straightforwardly obvious thing.


why is this ”anti-intellectualism”? intuition is a strong quality to have in your code.


What is intuitive is strongly dependent on what you have been taught. For example, if you have only been taught to use loops, then iterator functions like map and filter seem less intuitive. However, once you have learned them, iterator functions are dramatically more intuitive than loops.
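
For example, the same transformation both ways (Python):

    numbers = [3, 1, 4, 1, 5, 9, 2, 6]

    # loop version: the intent is buried in bookkeeping
    doubled_evens = []
    for n in numbers:
        if n % 2 == 0:
            doubled_evens.append(n * 2)

    # map/filter version: the intent is the whole expression
    doubled_evens_fn = list(map(lambda n: n * 2, filter(lambda n: n % 2 == 0, numbers)))

    assert doubled_evens == doubled_evens_fn == [8, 4, 12]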


"danger abstraction too high, big brain type system code become astral projection of platonic generic turing model of computation into code base"


Who wrote that? I want to know! I need to know! I need to follow him/her on Twitter, LinkedIn, GitHub, Whatever, ... I need to work with him/her, it's my doppelganger!



It's the author of htmx - very smart guy. :)


> limit damage of big brain developer early in project by giving them thing like UML diagram

Ah so this is the purpose of UML!


If you need nuanced behavior, you need a complex controller. And you always need more nuanced behavior; this is the inherent nature of software development. Sometimes the appearance of simplicity is achieved by making the behavior simpler (e.g. dropping support for old versions, not implementing parts of the specification, and so on). This is degradation, not simplicity. The right simplicity is the art of having the required complexity yet somehow managing it internally. So complexity is not bad. It is a given. What is deficient is our skill.


I don't think grug actually disagrees with you here, but takes the position that skill both can't be counted on and doesn't scale.

And this matches my own dev experience: cathedrals of complexity pale in comparison to code that's easy to throw away and rewrite to meet changing requirements or scope.


My experience has been that the powers that be generally won't give developers time to rewrite to meet changes in the requirements. Personally as I see it:

iterative development with time to rewrite > cathedral development > "iterative" development without time to rewrite

I think that most teams actually do "iterative development without time to rewrite" so cathedrals of complexity would actually be an improvement.


ok. so grug make good, good point. many good point.

now say others no listen and do opposite of what grug say for month after month. code complex. code very complex

what grug do now


ok. some need feel fire under ass to see house burning. grug only relax and wait. complexity spirit demon do the rest


grug sad. grug make complexity demon more power. grug look at code. complexity demon look back. grug raises club. grug factor. complexity demon no more. grug tired. grug sleep.


grug say now ok reach for club


A lot of this resonates, particularly factoring code and carving out barriers later on once the project has settled.

I believe there are some antipatterns like singletons and globals that on the surface look grug-brained but are actually complexity multipliers.
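
A hedged sketch of what I mean (hypothetical Python): the global looks grug-simple, but every caller becomes invisibly coupled to shared mutable state, which is the multiplier.

    # looks simple: one global, no wiring
    CONFIG = {"retries": 3}

    def fetch_with_global(url):
        # hidden dependency: behavior silently changes if anything mutates CONFIG
        return f"GET {url} (retries={CONFIG['retries']})"

    # slightly more typing, but the dependency is visible and easy to test in isolation
    def fetch_explicit(url, config):
        return f"GET {url} (retries={config['retries']})"

    print(fetch_explicit("https://example.com", {"retries": 1}))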


Saying "complexity bad" without giving a definition of complexity is not very meaningful. In reality it means "I'll use my personal judgement and anything I don't like will be marked as complexity".

Rich Hickey gave his own practical definition of complexity and followed it in his design, and sometimes the results are very unintuitive. For example, transducers in Clojure are actually simple (by his definition) because they de-couple transformation from the context. Also, by his definition the HTMX approach (aka PJAX, aka HTML-over-the-wire) would be more complex than JSON + client-side rendering, because it couples together multiple things: network, routing and rendering.
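
Not Clojure, but a rough Python analogue of the decoupling he means; the transformation below is defined purely in terms of a reducing step and knows nothing about the collection, stream, or channel it will eventually run over:

    # transducer-style step transformers: they wrap a reducing function
    # without knowing anything about the source or destination collection
    def mapping(f):
        return lambda step: lambda acc, x: step(acc, f(x))

    def filtering(pred):
        return lambda step: lambda acc, x: step(acc, x) if pred(x) else acc

    def transduce(xform, step, init, coll):
        rf = xform(step)
        acc = init
        for x in coll:
            acc = rf(acc, x)
        return acc

    # increment, then keep the even results -- same transformation, any context
    inc_then_evens = lambda step: mapping(lambda x: x + 1)(filtering(lambda x: x % 2 == 0)(step))

    print(transduce(inc_then_evens, lambda acc, x: acc + [x], [], range(10)))               # [2, 4, 6, 8, 10]
    print(transduce(inc_then_evens, lambda acc, x: acc + x, 0, (n * n for n in range(5))))  # 12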


I read it in this voice.

https://youtu.be/v79fYnuVzdI?si=2iEdgEgx3Q-7RyI_

(Zathras Wrong Tool)


I enjoyed "black think juice".


Grug not put water on body every day


> but grug must to grug be true


BUS RAM CPU BUS RAM CPU


Incredible essay. Very funny and very wise.


>sad but true: learn "yes" then learn blame other grugs when fail, ideal career advice

grug speak only true


OK, this was better than expected.


While I mostly agree, you can overdo the "grug". For example, it is possible to underengineer (underabstract?) software for years until you realize that simple things are still complicated, and you fail to build good abstractions once the patterns have emerged. If you build abstractions, you need a way to correct them anyway, making breaking changes to their contract.


The point is that "underengineering" then having to tie up the abstractions when it's really needed is oftentimes (read: not always) better than overengineering and entering the domain of the complexity demon


I agree that the balance is way too often shifted in the way of overengineering, probably because of our field being skewed by "higher status" thinking.


very hard to read, but worth it


Easy read. This grug turn off brain. Only grok from page for this grug.


big brain is not use brain make brain read grug.


I asked chatgpt to fix it:

```
Intwoduction

this cowwection of thoughts on softwawe devewopment gathewed by gwug bwain devewopew

gwug bwain devewopew not so smawt, but gwug bwain devewopew pwogwam many wong yeaw and weawn some things awthough mostwy stiww confused

gwug bwain devewopew twy cowwect weawns into smaww, easiwy digestibwe and funny page, not onwy fow you, the young gwug, but awso fow him because as gwug bwain devewopew get owdew he fowget impowtant things, wike what had fow bweakfast or if put pants on

big bwained devewopews awe many, and some not expected to wike this, make souw face

THINK they awe big bwained devewopews many, many mowe, and mowe even definitewy pwobabwy maybe not wike this, many souw face (such is intewnet)
```


Some sentences are, but the whole thing is a pleasure to read, I really enjoyed it :)


Reminds me of classic FILM CRIT HULK columns. The content’s good but the style can take some getting used to.


He dropped that style a while ago, for that reason I think.


Grug took their muse too far. Had much hard time try parse meaning from grug.


This is just the best. I’m gonna force this on my younger colleagues.


Thankfully ChatGPT can translate this into readable English.


This is my favourite read this year I think :D


Nice to see this pop up again. Grug life!!


[flagged]


Nobody asked.



