
Thanks for saying this out loud. I’m a solo dev, and in my project I’m doing exactly this: 90% black-box integration tests and 10% unit tests for edge cases I cannot trigger otherwise. It saves me precious time because I don’t have to adjust tests after every refactor. Yet it made me feel like a heretic: everyone knows the testing pyramid, and it comes from Google, so I must be very wrong.



You might be interested in the ‘testing trophy’ as an alternative to the traditional pyramid.

https://kentcdodds.com/blog/write-tests


This advice is so misguided that I'm concerned for our industry, given how much traction it's getting.

> You really want to avoid testing implementation details because it doesn't give you very much confidence that your application is working and it slows you down when refactoring. You should very rarely have to change tests when you refactor code.

Unit tests don't need to test implementation details. You could just as well make that mistake with integration or E2E tests. Black box testing is a good practice at all layers.
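
To make that concrete, here's a minimal TypeScript sketch (function and values invented) of black-box testing at the unit level: the assertions only touch observable input and output, so the internals can change freely.

    // Hypothetical unit: how the string is built is an implementation
    // detail; the assertions below never depend on it.
    function formatPrice(cents: number): string {
      return `$${(cents / 100).toFixed(2)}`;
    }

    // Black-box assertions: input in, observable output out.
    console.assert(formatPrice(500) === "$5.00");
    console.assert(formatPrice(1999) === "$19.99");
    // A brittle alternative would spy on how the string is assembled,
    // coupling the test to one implementation and breaking on refactor.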

What unit tests do is confirm that the smallest pieces of the system work as expected in isolation. Yes, you should also test them in combination with each other, but a green integration test does you no good when it likely exercises only a small fraction of the functionality of the units themselves.

This whole "unit tests slow you down" mentality is incredibly toxic. You know what genuinely slows me down? A suite with hundreds of integration tests, each taking several seconds to run, and depend on external systems. But hey, testcontainers to the rescue, right?

Tests shouldn't be a chore, but an integral part of software development. These days I suppose we can offload some of that work to AI, but even that should be done very carefully to ensure that the code is high quality and actually tests what we need.

Test code is as important as application code. It's lazy to think otherwise.


If by "smallest pieces of the system" you mean something like individual classes then you are definitely testing implementation details.

Whenever you change a method's parameters in one of those internal classes you'll have unit tests breaking, even though you're just refactoring code.
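
For instance (a made-up TypeScript sketch), a pure refactor of an internal class's signature breaks every test pinned to it:

    // Before the refactor the internal class looked like:
    //   send(to: string, body: string): void
    // After a pure refactor it takes a message object instead:
    class Mailer {
      send(message: { to: string; body: string }): void {
        // behavior unchanged; only the parameter shape moved
      }
    }

    // Every unit test written against the old signature now fails to
    // compile, even though nothing observable changed:
    // new Mailer().send("a@b.c", "hi"); // type error after the refactor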

Unit testing at the smallest piece level calcifies the codebase by making refactors much more costly.


> If by "smallest pieces of the system" you mean something like individual classes then you are definitely testing implementation details.

No, there's nothing definite about that.

The "unit" itself is a matter of perspective. Tests should be written from the perspective of the API user in case of the smallest units like classes and some integration tests, and from the perspective of the end user in case of E2E tests. "Implementation details" refers to any functionality that's not visible to the user, which exists at all levels of testing. Not writing tests that rely on those details means that the test is less brittle, since all it cares about is the external interface. _This_ gives you the freedom to refactor how the unit itself works however you want.

But, if you change the _external_ interface, then, yes, you will have to update your tests. If that involves a method signature change, then hopefully you have IDE tools to help you update all call sites, application code included. Nowadays, with AI assistants, this type of mechanical change is easy to automate.

If you avoid testing classes, you're choosing to ignore your API users, who very likely include yourself. That seems like a poor decision to make.


Congrats, you understand what "unit test" was originally supposed to refer to. That's not what it has meant to most people for years, though. The common meaning is "test every individual function in isolation".

I think this came about because people copied the surface appearance of examples (syntactic units, functions) without understanding what the examples were trying to show (semantic units). That simplification then got repeated over and over until the original meaning was lost.


> If by "smallest pieces of the system" you mean something like individual classes then you are definitely testing implementation details.

If your classes properly specify access modifiers, then no, you're not testing implementation details. You're testing the public interface. If you think you're testing implementation details, you probably have your access modifiers wrong in the class.
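
A quick hypothetical TypeScript sketch: with the modifiers right, the private helpers are simply unreachable from tests, so tests can't couple to them even by accident.

    class PasswordPolicy {
      check(password: string): boolean {
        return this.longEnough(password) && this.hasDigit(password);
      }
      // Private helpers: renaming or merging them is a refactor
      // that no test written against check() will ever notice.
      private longEnough(p: string): boolean { return p.length >= 12; }
      private hasDigit(p: string): boolean { return /\d/.test(p); }
    }

    // Only the public surface is tested:
    console.assert(new PasswordPolicy().check("correcthorse1battery") === true);
    console.assert(new PasswordPolicy().check("short1") === false);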


If I change something at the lowest level in my well abstracted system, only the unit tests for that component will fail, as the tests that ‘use’ that component mock the dependency. As long as the interface between components doesn’t change, you can refactor as much as you want.
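
Roughly like this (a TypeScript sketch with invented names), where the dependency is stubbed behind an interface:

    // OrderService depends on PriceSource only through an interface.
    interface PriceSource {
      priceOf(sku: string): number;
    }

    class OrderService {
      constructor(private prices: PriceSource) {}
      total(skus: string[]): number {
        return skus.reduce((sum, sku) => sum + this.prices.priceOf(sku), 0);
      }
    }

    // Hand-rolled stub: the real PriceSource can be rewritten freely;
    // this test only breaks if the interface itself changes.
    const stubPrices: PriceSource = { priceOf: () => 100 };
    console.assert(new OrderService(stubPrices).total(["a", "b"]) === 200);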


I prefer having the freedom to change the interface between my components without then having to update large numbers of mocked tests.


Sure, that's a tradeoff that you make. Personally I update my implementations more often than I update the interfaces, so I'm happy to take that hit when modifying the interface in trade for knowing exactly where my implementations break.


In a perfect world, each unit would do the obvious thing without many different paths through it. The only paths would be the ones actually relevant to the function. In such a perfect world, the integration test could trigger most (all?) paths through the unit and separate unit tests would not add value.

In this scenario unit tests would not add value over integration tests when looking for the existence of errors.

But: In a bigger project you don't only want to know "if" there is a problem, but also "where". And this is where the value of unit tests comes in. Also you can map requirements to unit tests, which has some value (in some projects at least).

edit: now that I think about it, you can also map requirements to E2E tests. That would probably work even better than mapping them to unit tests.


> In a perfect world, each unit would do the obvious thing without many different paths through it.

I don't think that's realistic, even in an imaginary perfect world.

Even a single pure function can have complex logic inside it, which changes the output in subtle ways. You need to test all of its code paths to ensure that it works as expected.
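
For example (illustrative TypeScript only), even a tiny pure function hides branches that an integration test is unlikely to reach on purpose:

    function clampPercent(value: number): number {
      if (Number.isNaN(value)) return 0; // path 1: invalid input
      if (value < 0) return 0;           // path 2: below range
      if (value > 100) return 100;       // path 3: above range
      return Math.round(value);          // path 4: normal case
    }

    // One cheap unit test per path:
    console.assert(clampPercent(NaN) === 0);
    console.assert(clampPercent(-5) === 0);
    console.assert(clampPercent(150) === 100);
    console.assert(clampPercent(42.4) === 42);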

> In such a perfect world, the integration test could trigger most (all?) paths through the unit and separate unit tests would not add value.

This is also highly unlikely, if not impossible. There is often no way for a high-level integration test to trigger all code paths of _all_ underlying units; much of that behavior is only exposed at the lower unit level, through entirely different public interfaces.

Even if such integration tests were possible, there would have to be so many of them that maintaining and running the entire test suite would become practically unbearable. The reason we're able to (and should) test all code paths is precisely because unit tests are much quicker to write and run. They're short, don't require complex setup, and can run independently of every other unit.

> But: In a bigger project you don't only want to know "if" there is a problem, but also "where". And this is where the value of unit tests comes in.

Not just in a "bigger" project; you want to know that in _any_ project, preferably as soon as possible, without any troubleshooting steps. Elsewhere in the thread people were suggesting bisecting or using a debugger for this. This seems ludicrous to me when unit tests should answer that question immediately.

> Also you can map requirements to unit tests, which has some value (in some projects at least)

Of course. Requirements from the perspective of the API user.

> now that I think about it, you can also map requirements to E2E tests.

Yes, you can, and should. But these are requirements of the _end_ user, not the API user.

> That would probably work even better than mapping them to unit tests.

No, this is where the disconnect lies for me. One type of testing is not inherently "better" than the others. They all complement each other, and together they ensure that the code works for every type of user (programmer, end user, etc.). Choosing to write fewer unit tests because you find them tedious to maintain is just being lazy. Finding excuses like integration tests bringing more "bang for your buck" or unit tests "slowing you down" is harmful to you and your colleagues as maintainers, and ultimately to your end user when they run into some obscure bug your high-level tests didn't manage to catch.


> Even if such integration tests were possible, there would have to be so many of them that maintaining and running the entire test suite would become practically unbearable. The reason we're able to (and should) test all code paths is precisely because unit tests are much quicker to write and run. They're short, don't require complex setup, and can run independently of every other unit.

I think having a good architecture plays a big role here.



