
I think we should try to separate exploration from implementation. Some of the ugliest, most untestable code bases I have worked with were the result of someone pushing exploratory research code into production. It's OK to use code to figure out what you need to build, but you should then discard it and create the testable implementation you actually need. If you do this, you won't be writing tests up front while exploring the solution space, but you will be when doing the final implementation.


Have you ever had to convince a non-technical boss or client that the exploratory MVP you wrote and demonstrated to them must be completely rewritten before going into production? I tried that once, when I attempted to take us down the TDD route, and let me tell you, it did not go over well.

People blame engineers for not writing tests or doing TDD when, if they did, they would likely be replaced by someone who can churn out code faster. It is rare, IME, to have a culture where the measured, slower progress of TDD is an acceptable trade-off.


Places where the software carries a great deal of value tend to be more like that. That is, if mistakes can cost $20,000 an hour or so, then even the business side will back down in the "push it now" vs. "be sure it works" debate.

As always, the job of a paid software person is to reconcile what the product people want with what good software quality requires (and with the power a future version could unleash). Implement valuable things in software in a way that makes the future of that software better and more powerful.


I've always favored exploration before implementation [1]. For me, TDD has immense benefit when adding something well defined, or when fixing bugs. When it comes to building something from scratch, I find it gets in the way of the iterative design process.

I would, however, be more amenable to e.g. prototyping first and then using that as a guide for TDD. Not sure if there is a name for that approach, though. A "spike", maybe? (Rough sketch of what I mean below.)

[1] https://www.machow.ski/posts/galls-law-and-prototype-driven-...
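
To make that concrete, here is a minimal sketch in Python with pytest. The function parse_price and its behaviour are invented purely for illustration: the idea is that the throwaway prototype tells you what the behaviour should be, and you pin that behaviour down as tests before (or while) writing the clean implementation.

    # Minimal sketch, assuming a hypothetical parse_price() whose desired
    # behaviour was discovered in a throwaway prototype. The tests encode
    # that behaviour; the clean implementation exists to satisfy them.
    from decimal import Decimal, InvalidOperation

    import pytest


    def parse_price(text: str) -> Decimal:
        # Production rewrite: small, typed, and covered by the tests below.
        cleaned = text.strip().lstrip("$").replace(",", "")
        try:
            return Decimal(cleaned)
        except InvalidOperation:
            raise ValueError(f"not a price: {text!r}")


    def test_accepts_formatted_dollar_amounts():
        assert parse_price("$1,299.50") == Decimal("1299.50")


    def test_rejects_garbage_input():
        with pytest.raises(ValueError):
            parse_price("not a price")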


I find that past a certain size, even an exploratory code base benefits from having tests. Otherwise, as I'm hacking, I end up breaking existing functionality, and then I spend more time debugging, trying to figure out what changed. What's your experience when it comes to more than a few hundred lines of code?


Indeed, but once you start getting to that point, I'd argue you're moving beyond a prototype. You raise a good point, though: I'd say if the intention is to throw the code away (which you probably should), then add only as few tests as will allow you to make progress.


Most projects don't have the budget to rewrite the code once it is working.


Most projects don't have the budget not to rewrite the code.


I think this is reasonable, and it's the approach I take. It's OK to explore and figure out the "what". Once you know (or the business knows), then it's time to write a final spec and test coverage. In the end, the mantra should be "it's just code".


This makes sense, but I think many (most?) pipelines don't allow for much playtime because they are too rigid and top-down. At best you will convince somebody that a "research task" is needed, but even that is just another thing you have to get done in the same time frame. Of course, this is the fault of management, not of TDD.



