A sentiment among members of a former team was that automated tests meant you didn't need to write understandable code - let the tests do the thinking for you.
This, and stuff like your story, are why I don't trust people who promote test-driven development as the best way to write clean APIs.
There should be integration tests along with some property-based tests and fuzz tests; that usually catches a lot of things. Invest in monitoring and alerts too.
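For illustration, a property-based test can look like this (a minimal sketch in Python using the hypothesis library; the encode/decode pair is hypothetical, standing in for whatever round-trippable logic you actually have):

    # Minimal property-based test with the `hypothesis` library.
    # `encode`/`decode` are hypothetical stand-ins for your own code.
    from hypothesis import given, strategies as st

    def encode(s: str) -> bytes:
        return s.encode("utf-8")

    def decode(b: bytes) -> str:
        return b.decode("utf-8")

    @given(st.text())
    def test_roundtrip(s):
        # Property: decoding an encoded value returns the original,
        # for any generated string, not just hand-picked examples.
        assert decode(encode(s)) == s

Instead of asserting on a few fixed inputs, the library generates hundreds of strings, including the weird edge cases a human wouldn't think to try.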
TDD is like relying on a debugger to solve your problem. Is a debugger a good tool? Yes, it is a great tool. But using it as an excuse to avoid understanding what happens under the hood is plain wrong.
The problem lies in an industry where software engineering is not given any value, but whiteboarding and solving puzzles is.
Software engineering is a craft honed over years of making mistakes and learning from them. You want code ASAP? Kick the experienced engineers out, bring code monkeys in, and get an MVP.
Quality is not a clever algorithm, but clear, concise logic. Code should follow the logic, not the other way around.
And yet tests seem to have made this massive garbage heap actually work and enable a lump of spaghetti to continue to operate as a viable product. It doesn't mean you should write bad code, but it seems like if it can make even the most awful of code viable, then that's a pretty good system. The fact that modern medicine allows the most beat up and desperate to continue to live on isn't an indictment against medicine, it's a testament to it. Don't write bad code, sure. We can all agree to that. Don't prioritize testing? Why? To intentionally sabotage yourself so that you're forced to rewrite it from scratch or go out of business?
I’m sympathetic but this is too strong: what needs to die is dogma. TDD as a way of thinking about the API you’re writing is good but anything will become a problem if you see it as a holy cause rather than a tool which is good only to the extent that it delivers results.
I remember when I realized that TDD shouldn't have the weight in our development that it had gotten (when it was high on the hype curve).
It was when we started using a messaging infrastructure that made everything much more reliable and robust, and through which we could start trusting the infrastructure much more (not 100%, of course).
It made me realize that the reason we wrote this excessively large number of tests (1800+) was the fragile nature of a request/response-based system: we "had to make sure everything worked".
What I'm trying to get at here is that TDD assumed the role of a large safety net for a problem we should have addressed in a different manner. After introducing the messaging, we could replay messages that had failed. After this huge turning point, tests were only used for what they should have been used for all along: ensuring predictable change in core functionality.
(our code also became easier to understand and more modular, but that's for another time...)
What you allude to there is pretty bad TDD. It was never intended as a replacement for good design, rather as an aid to be clear about design and requirements without writing tons of specs up-front.
And I agree that there are lots of anti-patterns that have grown in tandem with TDD, like excessive mocking with dependency injection frameworks, or testing renamed identity functions over and over just to get more coverage. I'd argue, though, that this is equally the fault of object-oriented programming.
Where I disagree is this: TDD and unit tests are still a very useful tool. Their big advantage is that you can isolate issues more quickly and precisely, IF you use them correctly.
For instance, if I have some kind of algorithm in a backend service operating on a data structure, and it has a bug, I do not want to spend time on the UI layer, network communication, or database interactions to figure out what is going on. Testing at the right scope gives you exactly that.
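A sketch of what that looks like (the rebalance function is hypothetical, standing in for the algorithm): because it's a pure function, the test touches no UI, network, or database, and a failure points straight at the algorithm.

    # Hypothetical pure function extracted from the backend service.
    def rebalance(weights: dict[str, float]) -> dict[str, float]:
        total = sum(weights.values())
        return {k: v / total for k, v in weights.items()}

    def test_rebalance_sums_to_one():
        result = rebalance({"a": 2.0, "b": 2.0})
        assert abs(sum(result.values()) - 1.0) < 1e-9
        assert result["a"] == 0.5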
The problem with TDD is that the methodology wants to cover every change, no matter how internal, with some sort of external test.
Some changes are simply not testable, period.
No, you cannot always write a test which initially fails and then passes when the change is made. When that is the case, you should understand why, and not try.
In some cases you can, yet still should not. If a whole module is rewritten such that the new version satisfies all of the public contracts with the rest of the code, then only those contracts need to be retested; we don't need new tests targeting internals. It's precisely because the old version wasn't targeted by such tests in the first place that it can be rewritten without upheaval.
I think TDD is the best way to develop that we have (so far). Obviously tests are code, and if you write crappy, highly-coupled tests you will just end up with even messier code.
This is a clear example of bad testing. The greatest advantage of TDD is in design: everything should be modular and easy to unit test, so you can:
- reproduce a bug and verify your bugfix in a matter of milliseconds with a proper unit test (see the sketch below)
- understand what the code does
- change and refactor code whenever you want
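To make the first point concrete, a regression test for a (hypothetical) pagination bug could look like this: you write it so it fails on the buggy version, fix the code, and from then on it verifies the fix in milliseconds:

    # Hypothetical regression test: the buggy version dropped the
    # final partial page; this pins down the fixed behavior.
    def paginate(items, page_size):
        return [items[i:i + page_size]
                for i in range(0, len(items), page_size)]

    def test_partial_last_page_is_not_dropped():
        assert paginate([1, 2, 3], 2) == [[1, 2], [3]]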
You can tell from what is written that they are not following TDD.
Redesigning that codebase into a clean, easy-to-test design would require exponential effort and time compared to having done it step by step from the start, but it would be worth it.
A unit test is the least useful kind of test. It requires your design to be "easy to unit test" instead of simple, and if you change something and have to rewrite the test, you might miss some logic in both pieces.
Plus, the tests never break on their own, because they're modular, and each time you run a test that was obviously going to pass, you've wasted your time.
As long as you have code coverage, better to have lots of asserts and real-world integration tests.
Integration tests are usually much slower, and you are testing tons of things at the same time. Something breaks (like in that example) and you have no idea what went wrong or why.
If you unit test properly, you are unit testing the business logic, which you have to properly divide and write in a modular fashion. If you want to test a more complex scenario, just add initial conditions or behaviors; a sketch follows below. If you can't do that or don't know how, then you don't know what your code is doing, or your code is badly designed. And that may be the case we read about above.
Tests rarely break, because they help you avoid breaking the code and its functionality, and they are so fast and efficient at making you realize that, that you don't feel the pain of it.
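"Adding initial conditions" can be as simple as parametrizing one unit test over several starting states. A sketch (the discount logic is hypothetical):

    import pytest

    # Hypothetical business logic under test.
    def apply_discount(price: float, loyalty_years: int) -> float:
        rate = 0.10 if loyalty_years >= 5 else 0.0
        return price * (1 - rate)

    # Each tuple is one initial condition for the same unit test.
    @pytest.mark.parametrize("price,years,expected", [
        (100.0, 0, 100.0),   # new customer: no discount
        (100.0, 5, 90.0),    # loyal customer: 10% off
        (0.0, 5, 0.0),       # edge case: free item
    ])
    def test_apply_discount(price, years, expected):
        assert apply_discount(price, years) == pytest.approx(expected)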
I can't imagine any example where "easy to unit test" != simple
In my work with Python, making things easy to unit test usually makes writing them a bit harder. You want functional methods, not mega classes with hundreds of class variables where each class method operates on some portion of those variables; that makes it impossible to truly isolate functionality and test it. While coding, though, it is very easy to make a class, throw god knows what into the class variable space, and access those variables whenever... However, if we have static methods that rely on nothing but the arguments provided and don't modify any class state, the tests are great. We can change and refactor our models with confidence, knowing the results are all the same.
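A sketch of the contrast (names made up):

    # Hard to test: methods read and mutate shared class state, so each
    # test has to reconstruct the object's entire hidden setup first.
    class Model:
        def __init__(self):
            self.data = []
            self.threshold = 0.5
            # ...dozens more attributes...

        def filter(self):
            self.data = [x for x in self.data if x > self.threshold]

    # Easy to test: depends only on its arguments, mutates nothing.
    def filter_above(values: list[float], threshold: float) -> list[float]:
        return [x for x in values if x > threshold]

    def test_filter_above():
        assert filter_above([0.2, 0.7, 0.9], 0.5) == [0.7, 0.9]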
In my opinion, the only thing that is valuable about unit tests is more appropriately captured in the form of function, class, and module contracts (as in "design by contract"). Unfortunately, very few languages have adopted DbC.
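Python, for instance, has no native DbC, but you can approximate pre- and postconditions with plain assertions (a rough sketch; the example is hypothetical):

    def withdraw(balance: float, amount: float) -> float:
        # Preconditions: the caller must request a valid, covered amount.
        assert amount > 0, "precondition: amount must be positive"
        assert amount <= balance, "precondition: insufficient balance"
        new_balance = balance - amount
        # Postcondition: the balance never goes negative.
        assert new_balance >= 0, "postcondition violated"
        return new_balance

The difference from a unit test is that the contract travels with the code and is checked on every real call, not just against the inputs a test author happened to imagine.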
Functional tests, now, that's another matter. But a lot of TDD dogmatism is centered on unit tests specifically. And that results in a lot of code being written that doesn't actually contribute to the product, and that is there solely so that you can chop the product up into tiny bits and unit test them separately. Then on the test side you have tons of mocks etc. I've seen several codebases where the test code far exceeded the actual product code in complexity - and that's not a healthy state of affairs.
In more recent times I've seen some growth in interest around contract testing. Unit tests are immensely more useful when paired with contract tests; without them they tend to be more of a hassle. At its essence an integration is a form of contract, but integration tests suffer their own problems. In rspec you have instance_double, which is a form of contract test as well, but not really sufficient for proper testing IMO. The current state, from what I've seen, is a little lackluster, but I wouldn't be surprised to see a growth in contract-testing libraries for a variety of languages.
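For what it's worth, the rough Python analog of instance_double is an autospecced mock: it rejects calls that don't match the real class's interface, which is a weak form of contract checking between a unit and its collaborator (a sketch; PaymentGateway is hypothetical):

    from unittest.mock import create_autospec

    class PaymentGateway:  # hypothetical collaborator
        def charge(self, account_id: str, cents: int) -> bool: ...

    gateway = create_autospec(PaymentGateway, instance=True)
    gateway.charge.return_value = True

    gateway.charge("acct_1", 500)   # fine: matches the real signature
    # gateway.charge("acct_1")      # TypeError: missing argument
    # gateway.refund("acct_1", 50)  # AttributeError: no such method

Like instance_double, though, nothing verifies that the real collaborator actually behaves the way the mock assumes, which is exactly the gap contract tests are supposed to fill.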
My tests are in the test folder. They are actually superfluous since integration tests test for the same thing.
I cannot break up the program in a way that would unit test a smaller piece of it in more detail. The only tests I could add would be for the command line driver.
For a single person and their one-person code base, you can certainly get away without unit tests.
This is especially true if your "integration tests" are testing the same component, and not actually integrating with numerous other components being developed by different teams - or if the system is so small it can run on a single workstation.
Working in teams on larger systems, the situation is different. Part of the point of unit tests is the "shift left" which allows problems to be discovered early, ideally before code leaves a developer's machine. It reduces the time until bugs are discovered significantly, and reduces the impact of one dev's bugs on other devs on the team.
TDD is yet another in a long line of "methodologies" that don't work. Tests are not a bad thing of course. The problem comes when you turn testing into an ideology and try to use it as a magic fix for all your problems. Same goes for "agile," etc.
Programming is a craft. Good programmers write good code. Bad programmers write bad code. No methodology will make bad programmers write good code, but bureaucratic bullshit can and will prevent good programmers from working at their best. The only way to improve the output of a bad programmer is to mentor them and let them gain experience.
The reality of working in teams at most companies is that there are going to be mediocre programmers, and even bad programmers, on the team. Many of the practices you refer to as bureaucratic bullshit are actually designed to protect programmers from the mistakes of other programmers.
Of course, this does require that the process itself has been set up with care, thought, and understanding of what's being achieved.
I'm probably not the best to speak on the topic as I don't use TDD (nor have I), but I think the idea is good if maybe a bit unorthodox: leveraging tests to define examples of inputs/outputs and setting "guards" to make sure the result of your code is as you expected via the tests.
I'm not keen on the "cult" of it, but if expectations of what the output should look like are available from the onset, it would appear to be of some benefit, at least.
I'm confused by your comment. Your premise is that TDD should die, and your support is comparing it to a "great tool". Should TDD really die, or should people just stop treating things as a silver bullet? I personally love TDD: it helps me reason about my interfaces and reduces some of the more cumbersome parts of development. I don't expect everyone to use TDD, and I don't use it all the time. Similarly, I'd never tell someone debuggers should die and they should never use a debugger if that's something that would help them do their job.
Unit tests are a side effect of TDD, they don't have to be the goal. I'd find value out of TDD even if I deleted all of my tests after. It sounds like your problems are around unit tests, and that is neither something required to TDD nor is it something limited to just TDD.
The problem with integration tests is that they are slow and they grow exponentially. If they aren't growing exponentially, then there are probably large chunks of untested code. Unit tests suffer their own problems: like you said, they can be useless because of a reliance on mocking, and they can also be brittle and break everywhere with small changes.
Ultimately any good suite of tests needs some of both. Unit tests to avoid exponential branching of your integration tests, and integration tests to catch errors related to how your units of code interact. I've experienced plenty of bad test suites, many of them are because of poorly written unit tests, but its often the poorly written integration tests that cause problems as well. As with most things, its all about a healthy balance.
No, like in some programs, when I figure out how to do it correctly, the unit tests are either complete tautologies or integration tests.
Then there are the "write once, never fail ever" tests. Okay, so the test made sense when I wrote the code. I will never touch that part ever again because it works perfectly. Why do I keep running them every time?
If the unit tests are tautologies then they aren't testing the right things, and if they are integration tests then they aren't actually unit tests.
I personally run my unit tests every time to confirm my assumption that the unit of code under test hasn't changed. I also assume all code I write will inevitably be changed in the future, because business requirements change and there's always room for improvement. Actually, I can't think of a single piece of code I've written (apart from code I've thrown out) that didn't eventually need to be rewritten. The benefit of running unit tests is less than the benefit of running integration tests, but the cost of running them is also significantly less. The current project I'm working on has 10x as many unit tests as integration tests, and they run 100x faster.
My workflow is usually to run the unit tests for the code I'm working on constantly, and when I think things are working, run the entire test suite to verify everything works well together. That's my workflow whether or not I'm doing TDD.
The code that determines truths about the data never had to be rewritten.
Like, are the two points neighbors? I mean, I'm not going to write a version of this function for a spherical board in the future. Nobody plays on a spherical board.
It's also a really boring unit test. Yes, (1,1) and (1,2) are neighbors. Do I really need to test this function until the end of time?
That's exactly the type of code that should be unit tested. The unit tests are trivially easy to write, and a very naive solution is easy to code up. The tests should take up a negligible overhead in your overall test suite runtime. Then, when it comes time to optimize the code because it's becoming a bottleneck, you can be confident that the more obscure solution is correct.
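Something like this (a sketch, assuming 4-way adjacency on a grid board; names made up):

    # Naive neighbor check: trivial to write, trivial to test, and a
    # safe baseline if the function is later optimized into something
    # less obvious.
    def are_neighbors(a: tuple[int, int], b: tuple[int, int]) -> bool:
        return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

    def test_neighbors():
        assert are_neighbors((1, 1), (1, 2))
        assert not are_neighbors((1, 1), (2, 2))  # diagonals don't count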
TDD should only drive the public interface of your "module"; if you're testing your internals, you're doing it wrong. It will hinder refactoring rather than help.
TDD doesn't think for you, it merely validates your own existing understanding/mental model and forces you to come up with it upfront. This is hardly a thing to be mistrustful about, unless you work with idiots.
You are right about that, but having code that passes a given test suite doesn't say anything about its secondary qualities, such as whether it can be understood. In theory, a failing test could improve your understanding of the situation, allowing you to refactor your initial pass at a solution, but I would bet that on this particular code base, the comprehension-raising doesn't go far enough, in most cases, for this to be feasible.
That seems orthogonal to testing, though. Implementation code can be hard to understand with or without a test suite; at least with tests, as you point out, you may be able to understand the behaviour at some higher level of abstraction.
Rich Hickey called it guard rail driven programming. You'll never drive where you want to go if you just get on the highway and bump off the guard rails.
Except that's a really bad analogy. It's more like you set up guard rails, and every time your vehicle hits a guard rail you change the algorithm it uses for navigation until it can do a whole run without hitting a guard rail.
I've experienced myself how the code quality of proper TDD code can be amazing. However it needs someone to still actually care about what they're doing. So it doesn't help with idiots.
It may not be a good analogy for TDD as properly practiced, but it seems to be very fitting for the situation described at the top of this thread, and that is far from being a unique case.
I don't think it's a generous analogy, but it's poking fun at being test DRIVEN, rather than driver driven. I think he'd agree with you that it's the thinking and navigating and "actually caring about what they're doing" that matters. Tests are a tool to aid that. Tests don't sit in the driver's seat.
Yeah. To me "test driven" really means that I write code under the constraints that it has to make writing my tests sensible and easy. This turns out to improve design in a large number of cases. There are lots of other constraints you can use that tend to improve design as well (method size, parameter list size, number of object attributes, etc are other well known ones). But "test driven" is a nice catch phrase.
Spot on: "Analogies: Analogies are good tools for explaining a concept to someone for the first time. But because analogies are imperfect they are the worst way to persuade. All discussions that involve analogies devolve into arguments about the quality of the analogy, not the underlying situation." - Scott Adams, creator of Dilbert (I know he's quite controversial since the election, but he's on point here) in https://blog.dilbert.com/2016/12/21/how-to-be-unpersuasive/
Scott Adams has been quite controversial long before the elections, ever since he got busted as a sock puppet "plannedchaos," posing as his own biggest fan, praising himself as a certified genius, and calling people who disagreed with him idiots, etc. Not to mention his mid to late '90s blog posts about women.
>Dilbert creator Scott Adams came to our attention last month for the first time since the mid to late '90s when a blog post surfaced where he said, among other things, that women are "treated differently by society for exactly the same reason that children and the mentally handicapped are treated differently. It's just easier this way for everyone."
>Now, he's managed to provoke yet another internet maelstorm of derision by popping up on message boards to harangue his critics and defend himself. That's not news in and of itself, but what really makes it special is how he's doing it: by leaving comments on Metafilter and Reddit under the pseudonym PlannedChaos where he speaks about himself in the third person and attacks his critics while pretending that he is not Scott Adams, but rather just a big, big fan of the cartoonist.
Yup, in static vs dynamic conversations, I invariably see someone dismiss the value of compiler enforcement by claiming that you should be writing unit tests to cover these cases anyway. Every time I say a silent prayer that I never end up working with the person I'm talking to haha.
I don't see how this is an argument against TDD. Apparently a whole slew of things went wrong in this project but that doesn't imply that testing is the cause of them.
TDD only works in conjunction with thorough peer reviews. Case in point: at my place of work, code and tests written by an intern can go through literally dozens of iterations before the check-in gets authorized, and even the senior engineers are not exempt from peer reviews (recent interns are especially eager to volunteer).