1) There are different types of tests, for different purposes. Devs should be writing some of them. Other types & forms of testing, I agree, are not in many devs' sweet spot. In other words, by the time code gets thrown over the wall to QA, it should already be fairly well vetted at least in the small.
2) Many, but far from all, QA people are just not skilled. It wasn't that long ago that most QA people were washed-out devs. My experience has been that while testing isn't in the sweet spot of many devs, they've still been better at it than the typical QA person.
3) High quality QA people are worth their weight in gold.
4) Too often devs look at QA groups as someone onto whom they can offload the grunt work they don't want to do. Instead, QA groups should be partnering with dev teams to take on higher-level and more advanced testing, helping devs to self-help with other types of testing, and other such tasks.
> Too often devs look at QA groups as someone onto whom they can offload the grunt work they don't want to do.
That's a perfectly legitimate thing to do, and doing grunt work is a perfectly legitimate job to have.
Elimination of QA jobs - as well as many other specialized white collar jobs in the office, from secretaries to finance clerks to internal graphics departments - is just false economy. The work itself doesn't disappear - but instead of being done efficiently and cheaply by dedicated specialists, it's dumped on everyone else, on top of their existing workloads. So now you have a bunch of lower-skill busywork distracting the high-paid people from doing the high-skill work they were hired for. But companies do this, because extra salaries are legible in the books, while the heavy loss of productivity isn't (instead it's a "mysterious force", or a "cost disease").
The problem of handoffs makes this work far from cheap.
And tests are not dumb work. TDD uses them to establish clarity, helping people understand what they will deliver rather than running chaotic experiments.
Highly paid people should be able to figure out how to optimize and make code easy to change, rather than ignoring technical debt and making others pay for it.
QA is just postponing the fix for the real problem: code that is hard to change.
The best QA people I've worked with were effective before, during, and after implementation. They worked hand in hand with me to shape features testably, collaborated on the implementation of the harness for the additional testing they wanted to do beyond what was useful for development, and followed up with assistance in finding and fixing bugs and in using regression tests to prevent that category of error from happening again.
At the very least I want someone in QA doing end-to-end testing using e.g. a browser or a UI framework driver for non-web software, but there's so much more they do than that. In the same way I respect the work of frontend, backend, infrastructure, and security engineers, I think quality engineering is its own specialized field. I think we're all poorer for the fact that it's viewed as a dumping ground or "lesser".
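To make "end-to-end testing using a browser" concrete, here's a minimal sketch (the URL, element ids, and credentials are made up; assumes Selenium 4 with Chrome available):

    # Drive a real browser through a login flow, end to end (hypothetical app).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")
        driver.find_element(By.ID, "username").send_keys("qa-user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()
        # Assert on the observable outcome, not the internals.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()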
Testing is probably my favorite topic in development and I kind of wish I could make it my "official" specialty but no way in hell am I taking a pay cut and joining the part of the org nobody listens to.
Not really. Unfortunately some organizations still follow the premise that the job of a QA is exclusively doing manual acceptance testing, and everything else is either beyond the scope of their work or the lowest of priorities.
Based on this, said organizations end up hiring people with barely any programming skills, let alone competence in software development.
What do you get from a QA who can barely piece a script together? What if you extend this to a whole team of QAs?
I've had the utter displeasure of working for a company whose QAs, even new hires, could not write a single line of code even if their lives depended on it. They had a single old legacy automated test suite, written by someone no longer in their ranks, that they did not use for anything other than arguing that they had automated some tests. But they hadn't posted a PR in over a year.
The worst part is that they vigorously lobbied management to prevent any developer from even considering writing their own test suite.
You claim developers can be incompetent. What do you call whole organizations that not only fail to do their job but also actively maneuver to prevent anyone else from filling the void?
I am going to say that outside the HN echo chamber, it is closer to all than to the other side. Have you been to Fortune 1000 non-software corps? If you threw away 90% of their IT people, people would barely notice. They'd probably just miss John's cool weekend stories on Monday (which is basically almost the weekend!). LLMs drive this home, painfully; we come into these companies a lot, and it is becoming very clear that most of those people could be replaced today with a $100 Claude Code subscription. We see the skilled devs using Claude Code on their own dime as one of their tools, often illegally (not yet allowed by company legal), and the rest basically, as they always did, trying to get through the week without getting caught snoring too loud.
Most people are mid because you define mid based on where most people stand. The good ones are those who stand out among their peers.
Being mid and competent are two different concepts though. That also depends on the organization you're in. In some orgs, the "mid" can be high-quality, whereas in others the "mid" might even be incompetent.
Yeah, I'm not sure why or how non-technical QA staff are meant to test my implementation of a load shedder. I'm 100% sure they're not going to realise the API is suboptimal and refactor it during the process of writing a test.
> In other words, by the time code gets thrown over the wall to QA, it should already be fairly well vetted at least in the small.
My opinion goes further than that. I tell my people: if you can't get a piece of software working correctly, then you have not completed your job.
It is definitely an art and skill that takes guidance and practice to develop, just like designing and writing the code, but IMO it's also the minimum bar to being a complete dev.
Having said that, we do use QA and we do find stuff in QA, but it is typically the type of thing that is exposed when linking systems and processes together.
That's why all successful open source projects have their own separate QA team which writes the tests and does the releases. Bullshit. The quality is better if the devs maintain the tests and do the releases themselves.
That's why all successful open source projects have their own separate QA team which writes the tests and does the releases. Bullshit. The quality is better if the devs maintain the tests and do the releases themselves.
Hi, QA here. I want to report a fault with your commenting. It seems you are not using DRY. Sometimes it is better to let a grown-up have a look at your output before deploying it.
> I sometimes suspect that the value of a QA team is inversely proportional to the quality of the dev team.
My experience has been that this is true, but not for the reason you likely intend. What I've seen is that the sort of shop that invests in low-tier QA/SDET types is the same sort of shop that invests in low-tier SEs who are more than happy to throw bullshit over the wall and hand off any and all grunt work to the testers. In those situations, the root cause is the corporate culture & priorities.
> There are different types of tests, for different purposes.
I'm unconvinced. Sure, I've heard all the different labels that get thrown around, but as soon as anyone tries to define them they end up being either all the same thing or useless.
> Devs should be writing some of them.
A test is only good if you write it before you implement it. Otherwise there is no feedback mechanism to determine whether it is actually testing anything. But you can't really write more than one test before turning to implementation. A development partner throwing hundreds of unimplemented tests at you to implement doesn't work; each test/implementation informs the next. One guy writes one test, another implements it, repeat: that could work in theory, I guess, but in practice it is horribly inefficient. In the real world, where time and resources are finite, devs have to write all of their own tests.
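As a sketch of that feedback loop (slugify and the test name are made up for illustration): run the test before the function exists, watch it fail, then implement until it passes.

    # Step 1: the test, written first. Run it now and it fails (NameError),
    # which is the proof that it actually exercises something.
    def test_slugify_replaces_spaces_with_hyphens():
        assert slugify("Hello World") == "hello-world"

    # Step 2: only now write the implementation, until the test passes.
    def slugify(text: str) -> str:
        return text.strip().lower().replace(" ", "-")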
Tests and types exist for the exact same purpose. Full type systems, as seen in languages like Lean, Rocq, etc., are monstrous beasts to use, though, so in the languages people actually use day to day we settle for the practical tradeoff of "runtime types", i.e. tests. I can't imagine you would want a non-dev writing your types, so why would you want them to write your tests?
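For instance (illustrative code, not from any particular project): a mainstream annotation can say "list of floats" but not "non-empty list of floats", which a full system like Lean's could, so a test picks up the slack.

    # The type constrains the input's shape; the test constrains behaviour
    # the type cannot express (here, the empty-list case).
    def mean(xs: list[float]) -> float:
        if not xs:
            raise ValueError("mean of an empty list is undefined")
        return sum(xs) / len(xs)

    def test_mean_rejects_empty_list():
        import pytest
        with pytest.raises(ValueError):
            mean([])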
> High quality QA people are worth their weight in gold.
If you're doing that ticketing thing like the earlier comment talked about, yeah. You need someone else to validate that you actually understood what the ticket is trying to communicate. But that's the stupidest way to develop software that I have ever seen. Better is to not do that in the first place.
> Perhaps you should read up on regression tests, or snapshot tests, or consistency tests, or pretty much any flavor of UI tests.
You must have missed the first line: "I've heard all the different labels that get thrown around, but as soon as anyone tries to define them they end up being either all the same thing or useless." You never gave your definitions. Maybe yours could have been the first to break the pattern, but the definitions others have conceived for these terms certainly don't.
> regression tests
This one doesn't seem to be used by anyone. "Regression testing" is a term that I can see is commonly used. Did you intend to say that? But that, as it is commonly defined, simply means running your test suite after you've made changes to the code, to ensure that your changes haven't violated the invariants...
Which is like, uh, the reason for having tests. If you don't run them and react to any violations, what's the point? We can safely file that one under "they end up being all the same".
Your definition makes no sense, and at most reflects your own ignorance of the topic. I already listed a few very specific classes of tests, which not only have a very crisp definition but also by their very nature can only be deployed after implementations are rolled out.
> This one doesn't seem to be used by anyone.
That just goes to show how clueless and out of touch you are. It's absurd to even imply that regressions aren't tracked.
Listen, it's ok to read through discussions on topics you are not familiar with. If you want to chime in, the very least you can do is invest some time learning the basics before hitting reply.
> I already listed a few very specific classes of tests
We have a list of ostensible, undefined classes of tests that supposedly cannot be written before the implementation. But clearly all of those listed are written before implementation, at least under any common definition we can find to apply to them. If there is an alternative definition in force, we're going to have to hear it.
> It's absurd to even imply that regressions aren't tracked.
Still no definition, but I imagine if one were to define "regression test" it would be a test you write when a bug is discovered. But, of course, you would write that test before you implement the fix, to make sure that it actually exploits the buggy behaviour. It is not clear why we are left to guess the definitional intent but, using that definition, it is the shining example of why you need to write a test before turning to its implementation. As before, you would have no feedback to ensure that you actually tested what led to the original bug if you waited until after it was fixed.
Of course, if that's what you mean, that's just a test, same as all the others. It is not somehow a completely different kind of test because the author of the test is responding to a bug report instead of a feature request. If your teammate didn't jump to implementation before writing a test, the same test would have been written before the code ever shipped. The key point here is that "regression" adds nothing to the term. Another to file under "they end up being all the same".
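Under that reading, the order of operations looks like this (parse_price and the bug report are hypothetical):

    # Hypothetical bug report: parse_price("1,000") came back as 1.
    # Write the test first and run it against the buggy code; watching it
    # fail proves it reproduces the report. Only then apply the fix.
    def test_parse_price_handles_thousands_separators():
        assert parse_price("1,000") == 1000

    def parse_price(text: str) -> int:
        # the fix: strip the separators before converting
        return int(text.replace(",", ""))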
> Still no definition, but I imagine if one were to define "regression test"
Why do you need to "imagine" anything? Just google it. "Regression test" is a very standard thing.
Also, the first commenter was correct. Many, many, many kinds of tests are only useful after the code is written.
TDD works for some people doing some kinds of code, but I've never found that much value in it. With what I do, functional testing is highest impact, followed by targeted unit tests for any critical/complex library code, followed by integration or end to end or perf testing, depending on the project.
> Why do you need to "imagine" anything? Just google it.
Why not read the thread?
Perhaps the results are regional (in fact, we know they can be), but "regression test" literally returns results for "regression testing" instead, as said before. There is nothing out there to suggest anyone actually uses the term. Even the popular LLMs say the same thing Google does — that "regression test" is merely the act of running your tests after making changes — which is what we simply call "testing". So where do we go from here?
> Many, many, many kinds of tests are only useful after the code is written.
Are you referring to the entire codebase? Clearly once you've implemented the first test then all other tests are going to be dependent on code existing. However, that's not what we're talking about. "Implement" is in reference to the test, not the entire program.
> TDD works for some people doing some kinds of code
"Test first" isn't really TDD, although TDD suggests it too. The idea is way older than TDD. TDD is actually about testing behavioural stories instead of testing implementation details. "Test first" does help ensure that you don't accidentally test implementation details (can't when implementation doesn't yet exist), but it isn't some kind of strict requirement. Technically you can practice the spirit of TDD even if you write tests after.
But out of curiosity, if you ever use a language with static types, do you also defer defining the types until after the implementation is finished? I've never seen that before. In my experience, developers find it easier to specify a part of the program before proceeding with implementing what is specced.
> I've never found that much value in it.
I mean, to be fair, I don't either, because why would I ever make mistakes? I most definitely do find the value when others do it, though. But I get what you are saying. I too was once a junior developer with insular thinking. Now that I'm old and experienced, I have to worry about how groups of people interact. That changes your perspective.
> functional testing is highest impact, followed by targeted unit tests for any critical/complex library code, followed by integration or end to end or perf testing
What's the difference? Kent Beck, who is usually credited with coining "unit test", has said on numerous occasions that a unit test is a test that can run without affecting other tests. Which, in reality, is just a test. You would never purposefully write a test that can break another, surely? If only some (or none) of your tests are unit tests, I say you are doing something horribly wrong. Lump them in the "useless" category.
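Isolation in that sense is easy to show (toy tests, made up for this comment):

    # A test that leans on shared module state is not a "unit test" in
    # Beck's sense: its outcome depends on what ran before it.
    counter = {"value": 0}

    def test_not_isolated():
        counter["value"] += 1
        assert counter["value"] == 1   # fails if any other test bumped it

    def test_isolated():
        local = 0
        local += 1
        assert local == 1              # no ordering or parallelism breaks this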
> I too was once a junior developer with insular thinking
My dude, I've been a professional SWE for more than ten years lol. I don't know where you've been working, but I've been in Silicon Valley companies and startups.
I have honestly never met an engineer -- other than interns or new grads -- who didn't know the difference between a unit test and a functional test lol. Or a regression test, either, for that matter.
I'm kind of impressed that someone could read so many sources and yet not take anything away from them.
Unit tests are not "tests that can be run without affecting other tests". Maybe that was true in the 90s; I don't know how code was written and tested back then. That is not how the term is used in modern parlance.
Beck still uses it that way, but I can appreciate that he is only the credited originator, not some kind of official authority. Just because he uses it one way does not mean you use it the same way. I only reach for his definition as it is the only one I am familiar with.
Language is certainly fluid. You are still fairly new to the industry by your own admission, so I can understand that the kids' lingo may have changed by the time you started learning about things. However, for better or worse, I cannot relive your life experience. Google, which models the user when picking results, doesn't help as it returns results that match my past experience. I fully expect your Google searches offer different results, but unless you're offering up your account for me to use... (don't do that)
> That is not how the term is used in modern parlance.
Right, as indicated in the original comment, along with those that followed, I don't know how you use it in modern parlance. What does it mean to you?
> Google "unit test definition". What do you get?
It says that it is a test that runs independently. Which is just another way to say the same as what Beck says.
That's a funny way to say "Actually, you're right. No matter what definition I try to come up with, they end up being either all the same thing or useless", but I'll accept it.