Yep, notice there was no mention at all of why the original software was so ill-designed in the first place. Not even a hint of curiosity as to why. Your conclusion is more valid, though I wouldn't necessarily place the blame on the PM. Agile/Scrum rituals, where blame is diffused and developers are forced to sprint quickly through poorly-designed tickets, yield poorly-designed software. Who could have guessed? Feels like a systemic problem with the "modern" bloated software organization.
Part of the task is to push engineers to understand the customer's problems and work that way. Sometimes it's hard when engineers are stubborn (I'm guilty of that too).
This PM eventually found a way to push their engineers, as described in the article. So I think the PM achieved the goal pretty well.
> Yep, notice there was no mention at all about why the original software was so ill-designed in the first place.
All software is ill-designed in the first place. Even software I write to solve my own problems will usually do a poor job of solving that problem on the first iteration.
There is a reason old engineers say: "Plan to throw the first one away. You will anyway."
The root cause, I think, is that nobody really cares. They're not paid extra to care, either. The PMs are putting checkboxes together and writing reports for their managers without really asking how what they're designing will actually be used; the engineers are turning each checkbox into code without wondering whether what they're doing makes sense; and the project managers are making sure the train runs on time without regard for where the train is actually going.

At the end of the day, the company's stonk goes up, everyone gets paid and goes home to the family they care about and the hobbies they actually care about. If any of these characters in the play goes above and beyond to do something wonderful, they aren't getting paid more, the stonks aren't going up higher, and the effort is usually just wasted. I'm not saying this is bad, either; it's just part of why products are so bad.
My rule for discussing ANYTHING at work: Does this move the current project forward? Are we painting bike sheds, shaving yaks, and beating around the bush? Or are we here to get shit done? Everything else is a waste of time, politics included. If you want to be friends after work, we do that after work, where I'm more than happy to discuss my thoughts on the current government.
> tdd from the integration side and only add unit tests later
This is where I've landed as well. Unit tests are for locking down the interface, preventing regressions, and solidifying the contract - none of which is appropriate for the early stages of feature development. Integration tests are almost always closer to the actual business requirements and can prove direct value - i.e., only once the integration works do you lock it down with unit tests.
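As a minimal sketch of that ordering (the app factory, route, and payload here are all hypothetical, just to illustrate the shape of an integration-first test):

```ts
// Hypothetical integration-first test: it exercises the real HTTP surface
// and pins down the business requirement, not internal function signatures.
import { test } from "node:test";
import assert from "node:assert/strict";
import { createServer } from "./app.js"; // hypothetical app factory

test("checkout totals two widgets the way the business expects", async () => {
  const server = createServer().listen(0); // random free port
  const { port } = server.address() as { port: number };

  const res = await fetch(`http://localhost:${port}/checkout`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ items: [{ sku: "widget", qty: 2 }] }),
  });

  assert.equal(res.status, 200);
  assert.equal((await res.json()).totalCents, 500); // 2 × 250
  server.close();
});
```

Only once something like this passes would I start pinning the internals down with unit tests.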
I've also toyed with a more radical idea: don't check your tests into the git repo. Or at least keep the developer tests separate from the automated tests. Think about it: what rule says that the tests used in development should go directly into the CI test suite? One is designed to help you navigate the creative process; the other is designed to prevent regressions. I think we do a disservice by conflating the two under one "testing" umbrella. During TDD, I need far more flexibility to redefine the shape of the tests (maybe one requires manual setup or expert judgement or ...) and I don't want to be hampered by a (necessarily) rigid CI system. Dev tests and CI tests serve two completely different purposes.
For me, dev testing is something I use directly in the hot feedback loop of writing code. Typically, I'll run it after every change, then manually inspect the output for quality assurance. It could be as simple as refreshing the browser or re-running a CLI tool and spot-checking the output. Importantly, dev tests for me are not fully fleshed out - there are gaps in both the input and output specifications that preclude full automation (yet), which means my judgement is still in the loop.
Not so with CI tests. Input and output are 100% specified, and no manual intervention is even possible.
There are some problems where "correct" can never be well defined. Think of any feature that has aesthetic values implied. You can't just unit test the code, brush off your hands and toss garbage over the wall for QA or your customers to pick up!
I use this technique mainly to avoid an over-reliance on automated testing. I've seen far too many painful situations where the unit tests pass but the core functionality is utterly broken. It's like people don't even bother to run the damn program they're writing! Unacceptable and embarrassing - if encouraging ad-hoc tests and QA-up-front helps solve this, it's a huge win IMO.
It's easy to get lost in the geodesy details, which really matter at a local scale! But on a global scale the earth is effectively round.
Many of us have seen images like this: https://www.asu.cas.cz/~bezdek/vyzkum/rotating_3d_globe/figu... which are effective at showing the shape and gravitational anomalies but are wildly exaggerated on the height axis. Visualized to scale, it would look indistinguishable from a sphere.
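For scale: geoid undulations top out around ±100 m, against a mean radius of about 6,371 km - roughly one part in 60,000.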
It's used in documentation mostly to distinguish the command from stdout. And it's verbatim what you see if you copy-paste from most default bash shells.
I don't like the term "rot" - your software isn't rotting, it's exactly the same as when you last edited it. The rest of the software ecosystem didn't "rot" either, it evolved. And your old code didn't. "Extinction" seems a much better fit.
I've seen developers add a second ORM library as a dependency, not because the first didn't do the job but because they just "forgot" about the first one and wanted to use the new hotness. Developers, just like LLMs, have biases that taint the solution space.
The key is that we all have an intuitive sense that this behavior is wrong - building a project means working within the established patterns of that project, or at least being aware of them! Going off half-cocked and building a solution without considering the context is extremely bad form.
In the case of human developers, this can be fixed at the code-review level, by encouraging a culture of reading code, not just writing it. Without proper guardrails, they'll create code that's dissonant with the existing project.
In the case of LLMs, the only recourse is context engineering. You need to make everything explicit. You need to teach the LLM all the patterns that matter. Their responses will always be probabilistic token salad, by definition. Without proper guardrails, it will create code that's dissonant with the existing project.
Either way, it's a question of subjective values. The patterns that are important need to be articulated, otherwise you get token salad randomly sampling the solution space.
To everyone with complaints about the new "Shortbread" styling, I agree that it's not perfect but that's kinda missing the point. The real story is that vector tiles have the styling applied client-side so anyone can tweak the look with a little javascript.
The prior raster tiles have the style baked in; if you want a new look, you need to generate a new image. So each map publisher ends up running their own data and server infrastructure just to tweak the style.
The vector tile approach means a single (cacheable) asset can be used in many different maps. Huge win. If you don't like the style, you can make your own without having to literally download the planet.
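As a rough sketch of what that tweak looks like with MapLibre GL JS (the style URL and layer ids below are placeholders - they depend on which style you start from):

```ts
import maplibregl from "maplibre-gl";

const map = new maplibregl.Map({
  container: "map",
  style: "https://tiles.example.com/shortbread/style.json", // placeholder URL
  center: [13.4, 52.5],
  zoom: 11,
});

map.on("load", () => {
  // Same cached vector tiles underneath; only the presentation changes.
  map.setPaintProperty("water", "fill-color", "#0b3d91");
  map.setLayoutProperty("place-labels", "visibility", "none");
});
```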
> Out-of-date documentation can be worse than no documentation
One solution to this is to write structured, testable documentation. Easier said than done, but if your docs are regularly integration/e2e tested against reality, they stand a much better chance of staying up to date. I always recommend moving the docs as close to the development work as possible - i.e., docs get checked into git alongside the code, and tests fail if anything drifts.
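As a minimal sketch of the idea, assuming a Node project where the README's fenced js examples should always run (the extraction here is deliberately crude):

```ts
// Hedged sketch: pull the fenced js blocks out of README.md and run each
// one as a CI test, so the docs break the build when they drift.
import { readFileSync } from "node:fs";
import { test } from "node:test";
import assert from "node:assert/strict";

const FENCE = "`".repeat(3); // avoids nesting a literal fence in this snippet
const readme = readFileSync("README.md", "utf8");
const pattern = new RegExp(`${FENCE}js\\n([\\s\\S]*?)${FENCE}`, "g");
const examples = [...readme.matchAll(pattern)].map((m) => m[1]);

for (const [i, source] of examples.entries()) {
  test(`README example #${i + 1} still runs`, () => {
    // Crude eval-style execution; a real setup would compile the snippet
    // against the library's public API instead.
    const run = new Function("assert", source);
    run(assert); // the test fails if the example throws
  });
}
```

The mechanism matters less than the outcome: the docs and the code now fail together.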
Documenting every piece of logic just because, I believe, is a code smell. But look at Rust's documentation (or that of public crates, at varying levels of quality): all of it is extracted from the code, with internal links and even external links between crates, thanks to cargo doc (which I think is many times better than Doxygen and whatever the Java and C# folks use - and let's not talk about the nonexistent alternative for JS/TS). Code examples in Rust docs can reference the actual code in the crate, and a code block can be marked "compiles", "doesn't compile", etc. - and the compiler guarantees that's true. A block that doesn't compile is used as an example of incorrect code.
The Unity game engine uses DocFX, and I heard from one developer that their system imports the actual code into the documentation's examples and fails the build if that code doesn't compile - similar to Rust.
Huh, I'd be interested to sit down with someone like that and better understand their viewpoint. I wonder if it's just comments that describe what the code is doing ("what", not "why") that they have a problem with, and they're just throwing the baby out with the bathwater.
I’ve written mini-blog posts in comments before to explain a complex system or an odd bit of code that is the result of a massive bug in production that I want to document and explain the reasoning behind it.