
I have completely the opposite perspective.

Unit tests actually need to be correct, down to the individual character. The same goes for API calls: the API needs to actually exist.

Contrast that with "high level design, rough outlines". Those can be quite vague and hand-wavy. That's where these fuzzy LLMs shine.

That said, these LLM-based systems are great at writing "change detection" unit tests that offer ~zero value (or negative).




> That said, these LLM-based systems are great at writing "change detection" unit tests that offer ~zero value (or negative).

That’s not at all true in my experience. With minimal guidance they put out pretty sensible tests.


> With minimal guidance[, LLM-based systems] put out pretty sensible tests.

Yes and no. They get all the annoying initial boilerplate of writing tests out of the way, and the tests end up being mostly decent on the surface, but I have to manually tweak the behavior and write most of the important parts myself, especially for non-trivial, tricky scenarios.

However, I am not saying this as a point against LLMs. The fact that they are able to get a good chunk of the boring boilerplate parts of writing unit tests out of the way and let me focus on the actual logic of individual tests has been noticeably helpful to me, personally.

I only use LLMs for the very first phase of writing unit tests, with most of the work still being done by me. But that initial phase is the most annoying and boring part of the process for me. So even if I still spend 90% of the time writing code manually, I'm very glad to get that boring initial part out of the way quickly, without wasting mental cycles on it.
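For illustration, here's roughly the kind of scaffold I mean (a hypothetical sketch in pytest; UserService is an invented stand-in, defined inline so the example runs):

    import pytest

    # Invented stand-in so the sketch is self-contained; in practice
    # this would be the real class under test.
    class UserService:
        def __init__(self):
            self._emails = set()

        def create_user(self, email):
            if email in self._emails:
                raise ValueError("duplicate email")
            self._emails.add(email)
            return len(self._emails)

    # The kind of boilerplate an LLM drafts in seconds: a fixture,
    # the happy path, one obvious failure case.
    @pytest.fixture
    def service():
        return UserService()

    def test_create_user_returns_id(service):
        assert service.create_user("alice@example.com") == 1

    def test_create_user_rejects_duplicate(service):
        service.create_user("alice@example.com")
        with pytest.raises(ValueError):
            service.create_user("alice@example.com")

The tricky scenarios still get added by hand; the LLM just saves me the fixture and happy-path drudgery.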


The fact that you think "change detection" tests offer zero value speaks volumes. They may well be the most important use of unit tests. Getting the function correct in the first place isn't that hard for a senior developer, which is often why it's tempting to skip unit tests. But then you refactor something and, oops, you've broken it without realizing it: some boring, obvious edge case or the like.

These tests are also very time-consuming to write, with lots of boilerplate that AI is very good at producing.
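A minimal sketch of what I mean (parse_price is a hypothetical function, invented for this example): a boring edge-case test like this is exactly what catches a careless refactor later.

    # Hypothetical function under test, invented for this sketch.
    def parse_price(text):
        # Returns cents; a "simpler" refactor to int(float(text) * 100)
        # would silently or loudly break every case pinned below.
        cleaned = text.strip().replace(",", "")
        if not cleaned:
            return 0
        return int(round(float(cleaned) * 100))

    def test_parse_price_edge_cases():
        assert parse_price("") == 0                # empty input
        assert parse_price(" 0.99 ") == 99         # surrounding whitespace
        assert parse_price("1,234.50") == 123450   # thousands separator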


>The fact that you think "change detection" tests offer zero value speaks volumes.

But code should change. What shouldn't change, as long as the business rules don't change, are the APIs and contracts. And for those we have integration tests and end-to-end tests.
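To sketch the distinction (Flask is used purely for illustration here; the app and the /users endpoint are invented): a contract-level test pins the status code and response shape, not the internals.

    from flask import Flask, jsonify, request

    # Invented toy service so the sketch runs standalone.
    app = Flask(__name__)
    users = {}

    @app.post("/users")
    def create_user():
        email = request.get_json()["email"]
        users[email] = {"email": email}
        return jsonify(users[email]), 201

    def test_create_user_contract():
        client = app.test_client()
        resp = client.post("/users", json={"email": "a@example.com"})
        # The contract: status code and response shape. How the
        # handler stores users is free to change underneath.
        assert resp.status_code == 201
        assert resp.get_json()["email"] == "a@example.com"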



I think you've misunderstood what he meant by "change detection" (I'm not the GP, so I could be wrong).

Hard to describe, easy to spot.

Some people write tests that are tightly coupled to their particular implementation.

They might have tons of setup code in each test, so refactoring means each test needs extensive rewrites.

Or there will be loads of asserts that have little to do with the actual thing being tested.

These tests usually have negative value: as another developer, your only real option is to delete them all and start again.
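Concretely, they tend to look something like this (my own hypothetical sketch, using the pytest-mock fixture; SignupService is invented):

    # Invented class so the sketch is self-contained.
    class SignupService:
        def _render_template(self, name, user):
            return f"<p>welcome {user}</p>"

        def _smtp_send(self, body):
            pass  # imagine a real SMTP call here

        def signup(self, email):
            self._smtp_send(self._render_template("welcome.html", user=email))

    # "Change detection" in the bad sense: every assert is pinned to
    # private helpers and exact call arguments.
    def test_signup(mocker):  # mocker comes from pytest-mock
        svc = SignupService()
        render = mocker.patch.object(svc, "_render_template", return_value="x")
        send = mocker.patch.object(svc, "_smtp_send")
        svc.signup("a@example.com")
        render.assert_called_once_with("welcome.html", user="a@example.com")
        send.assert_called_once_with("x")
        # Rename _render_template or batch the sends, and this fails
        # even though the observable behavior is unchanged.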

That's what I would interpret the GP as meaning when they use the phrase "change detection" tests.


>Some people write tests that are tightly coupled to their particular implementation.

That is not a choice people make; it's a consequence of what the actual code being tested does.

I think integration tests and end-to-end tests are much better.


>But then you go refactor something and oops you broke it without realizing it, some boring obvious edge case, or the like

I will start to care when integration tests fail, because that is an actual bug. Then I will fix the bug and move on.



