Poll: Do you test your code?
611 points by petenixey on March 14, 2012 | 339 comments
Do you have tests that run every time you push and ensure that the functionality on your site works?

There's always a lot of debate around testing and I'm interested to see how much people do and how satisfied they are with it

IF YOU'D LIKE TO ENCOURAGE OTHERS TO ANSWER, PLEASE UPVOTE - TY

We'd like to do more testing but it's too much overhead
2080 points
We don't really test much
1249 points
We have a test suite that tests a few critical things
1198 points
We have a test suite that tests all functionality
895 points
We are happy with the amount of testing we do
700 points
Tests? We don't need no stinking tests.
258 points
[ AND ALSO CLICK ON AN ANSWER BELOW... ]
218 points



I answered "a few critical things" ... but, for the most part, testing is tedious, frustrating, and a time-sink for me. I recently paid someone $100+ an hour for some remote TDD coaching. It's helping a bit but hasn't really changed my attitude towards testing (yet).

What bugs me:

- Testing frameworks and "best practices" change way faster than language frameworks and I simply can't keep up. What rspec version do I use with what version of Rails? Now I have to use Cucumber? I learned some Cucumber ... oh, now Steak is better. [rage comic goes here]

- Most bugs/edge cases I encounter in our production apps are things I'd never think to write a test for ...

- I deal with custom domains, authentication, and 3rd party API calls in almost every app we have. IMO, this adds 20% or more to the (already high) testing overhead just to get these things configured right in the test suite

- More code is moving to front-end Javascript stuff ... so, now I have to write Rails tests AND JS tests? Sounds delightful

Feel free to try and convince me otherwise, but I don't ever see myself in the "test ALL the things" camp.


My approach to testing is not to be obsessed with the latest, greatest framework or 100% code coverage.

I try to start with just one or two tests that actually help do things that are tedious or require multiple steps. It takes some time to automate a good test, but once you do, it immediately starts saving time because you don't have to run the same sequence a thousand times while developing. You can think of it more like a macro that saves you time.

Once you write the main test it's easy then to run it with all combinations of good and bad input. By doing that you'll often wind up hitting a pretty good percentage of your code.

Then as bugs are discovered due to unexpected input you can just keep adding more input situations.
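
To make the "macro" idea concrete, here is a minimal sketch in plain Test::Unit; the parse_price helper is invented for the example, but the pattern of looping one automated test over the good and bad inputs you would otherwise retype by hand is the point:

  require "test/unit"

  # Hypothetical helper under development; the test replaces re-running
  # the same inputs by hand after every change.
  def parse_price(str)
    Float(str.to_s.delete("$,"))
  rescue ArgumentError
    nil
  end

  class ParsePriceTest < Test::Unit::TestCase
    GOOD = { "19.99" => 19.99, "$1,200" => 1200.0, "0" => 0.0 }
    BAD  = ["", "abc", nil, "$ $"]

    def test_good_input
      GOOD.each { |input, expected| assert_equal expected, parse_price(input) }
    end

    def test_bad_input
      BAD.each { |input| assert_nil parse_price(input), input.inspect }
    end
  end

Adding a newly discovered bad input is then a one-line change to the BAD list.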


Testing is like jumping into cool water: you find so many reasons not to do it beforehand, but you're so happy to have done it afterwards...


Look at it this way: you must be testing code as you write it anyway. There's really no other sane way to do it. You make a change, you load the page and see that your change worked, or you call your new function from an interactive interpreter.

Smart automated testing just takes all that extra test work you're already doing and saves it as you go along.

No need to try to invent extra things to test. You just test what you would have tested anyway by hand.


You think like that until you hit your first serious regression and discover it has been in the code base for several months and that the person responsible for it has left.

I used to work at a company where automated testing sent you emails about what your commits broke. It does help improve code quality.


But then you'd still do the manual test after you complete your code. Nobody (I hope) codes blind, hoping it will work or be caught later by a test suite. Test suites don't reveal everything. Only what you tested for.


The process goes more like this:

1. Write some code

2. Test it

3. Debug your code

4. Test it

5. Debug your code

6. Test it

...

39. Debug your code

40. Test it; now it finally seems to work

If all your "test it" steps are being done manually, you're being very inefficient. A good unit test can actually make development go by faster with the added bonus of defending your code from changes down the road that might screw things up.


That's all fine unless you're exploring a solution space. Then the overhead of writing tests which are thrown away _in entirety_ is outrageous. AFAIK TDD really only works if you're either a) prepared to waste a huge amount of time writing tests which will later be completely redundant or b) working to a very clear set of requirements with tools and frameworks you already know intimately. </Rant>


I find I spend significantly more time refactoring/maintaining code than I spend writing exploratory code. It's silly to write tests for prototype work, but once you're actually close to having a working prototype, tests help. Having decent test coverage saves so much more time when refactoring/maintaining.

TDD isn't "THE" way, but test coverage helps. It's not fun (at least not for me), but it's less aggravating than breaking something 6 months down the road in some non-obvious way. I'm human, so I assume I'll screw something up eventually. Having test coverage helps keep me from shooting myself in the foot later.


Different levels of tests work here. I usually start with a very high-level test and then as I implement I do unit tests once I have a reasonably high confidence that the units are a good design.

You should often be able to at least create an automated acceptance test for what you're doing (e.g., "as a user I want to click this button and see XYZ"). This is usually extremely decoupled from the implementation so it should survive refactoring. So then do your exploratory code, get the test passing, and then refactor, introducing lower-level tests.

If that doesn't seem doable you might be taking on a task that doesn't have a good set of requirements. Writing code without any concrete use case in mind is fun and all but that kind of code should usually be limited to a prototyping sandbox.
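
One way (among many) to write that kind of button-click acceptance test in a Rails app is with Capybara; this is only a sketch, and the page, fields, and expected text are placeholders rather than anything from the parent comment:

  require "capybara/rspec"

  # Feature spec sketch: drives the app the way a user would, so it
  # survives refactoring of the implementation underneath.
  describe "placing an order", type: :feature do
    it "shows a confirmation after the button is clicked" do
      visit "/orders/new"
      fill_in "Quantity", with: "2"
      click_button "Place order"
      expect(page).to have_content("Order received")
    end
  end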


"If that doesn't seem doable you might be taking on a task that doesn't have a good set of requirements." Or it might have perfectly good requirements which are very hard to write automated tests for.

Consider (for instance) a program to translate ABC music format to proper sheet music. It's easy to say the basic requirement: "The program has to accurately translate ABC format to legible and easy to read sheet music." But even a start at automating a test for that would require converting a graphical representation of sheet music back to the basic notes, and that problem is at least an order of magnitude harder than writing the original program, without factoring in the "easy to read" bit at all. (PS This is a real issue for me, if someone knows of a decent open source sheet music to MIDI/ABC convertor I'd love to hear about it.)


The mistake here is that what you're describing is a functional test, not a unit test.

A unit test for a piece of code like this might be "Given that the time signature for the music is 3:4, the software puts 3:4 in the appropriate place on each line of output".

You then might write a variety of cases testing that it deals correctly with (say) a changing time signature at some point in the piece.

The upside of this is when you try and fix another bug which has a knock-on effect on this bit of code, lots of your tests are going to fail, immediately identifying where the problem is (or at least letting you know there is one!)


> A unit test for a piece of code like this might be "Given that the time signature for the music is 3:4, the software puts 3:4 in the appropriate place on each line of output".

Don't you still have the same problem colomon described here, though? Testing your stated condition "the software puts 3:4 in the appropriate place on each line of output" still implies some form of image recognition on the graphical output.


Who said anything about unit tests?

But okay, is it really so much easier to write a test which just tests changing the time signature? It's still going to require doing OCR. And if I'm really testing it, I've got to make sure the time signature change comes at the correct point, which requires OCR on the notes around it. Also the bar lines (to make sure the time signature change is reflected in the notes per bar and not just by writing a new time signature out and otherwise ignoring it).

Now, it's perfectly reasonable to have unit tests that the code correctly recognizes time signature changes in ABC format. (And it looks like I forgot to write them: I see inline key changes in the tests but not inline time signature changes.) But that's only testing half of the problem; and it's the easier half by far.


PS I now have unit tests that check the ABC parser can parse time signature changes.


Actually, I've written tests for half of that problem: parsing ABC text into an internal format. That is a very clear problem and one just needs a representative set of input files (I now have around 500 of them -- the tests still run in less than a minute). It's true that I haven't been able to figure out a useful way to have an automated test for the drawing part. Here's my project: http://code.google.com/p/abcjs/
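
For the parsing half, a test can stay entirely at the level of the internal format. A sketch of the time-signature case discussed above (AbcParser and its output structure are stand-ins, not the project's real API):

  require "test/unit"

  class AbcParserTest < Test::Unit::TestCase
    def test_inline_meter_change_is_parsed
      # ABC input: 4/4 header meter, inline [M:3/4] change in the second bar.
      tune = AbcParser.parse("X:1\nM:4/4\nK:C\nCDEF|[M:3/4]GAB|")
      assert_equal "4/4", tune.bars[0].meter
      assert_equal "3/4", tune.bars[1].meter
    end
  end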


In TDD, what you are talking about is called a "spike": just write a bunch of code to try out assumptions and find a direction to go that you are reasonably sure is a good one.


Sure, but what you're describing is prototyping.

Prototype all you want without tests, but once you've settled on a design & are ready to make it production-ready you should spend time at least writing unit tests for your work or (ideally) take what you've learned & re-apply to a clean design written in a test-first manner.

You might think this is a waste of time, but putting code into production without tests is going to give you more trouble in the long run.


Of course you have to know your tools. I usually start by doing test cases, then writing the real features while spork and watchr evaluate my new code every time I save. I rarely even open a browser; it's the final thing I do when my tests are green.

It doesn't slow me down, and I can be sure that my feature is there and works even when somebody refactors our software.
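
For anyone who wants the same save-triggered loop, a Guardfile along these lines does it. Guard is my example here (the same idea as the spork/watchr setup above), and the paths assume a standard Rails/RSpec layout:

  # Guardfile sketch: rerun the matching spec whenever a file is saved.
  guard :rspec, cmd: "bundle exec rspec" do
    watch(%r{^spec/.+_spec\.rb$})
    watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
    watch("spec/spec_helper.rb") { "spec" }
  end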


That's mostly a beginner's problem. I know several people for whom it's mostly:

  1. Write a lot of code
  2. Test it
  3. It works!
  4. Test it
  5. It works!
  6. Test it
  7. It works!
  ...
  15. Test it
  16. It works!
  17. Are you done?
TDD in no way makes 17 any clearer, because every test they thought of before writing the code works more or less the first time. And that's the core problem with testing: for a solid developer, what fails has nothing to do with the code; it's always a question of edge cases they did not think of. (Wait, some sales people are their own managers and outside consultants at the same time? Well, just Bob.) You can force these people to write tests, but it really does just slow them down.


Not grokking TDD, I literally worked my way through the book, doing each and every step, just to get the gist of the experience.

TDD works fucking great. If you know what you're doing.

Alas, that's a big IF. Most of the stuff I do, I'm just figuring shit out.

Mostly, like when designing a new library, I work outside-in. I imagine how I'd want to do something, writing the client pseudocodeish stuff first, and then trying to make the implementation support my idealized programming mental model.

I end up throwing away A LOT of code. Getting something short and sweet takes a lot of experimentation, most of which are duds.

Though my personal approach of outside-in is trivially like TDD, it's not nearly as rigorous. Were I to be as thorough as TDD, I'd be spending all my time writing tests. Which seems pointless, for code I'm like just going to throw away.

Anyway. Much respect for the guy who wrote that first TDD book. It's one of the few methodological strategies that works as advertised.


I do strict TDD when I can, and I consider spiking out things part of the process. If I need to approach a problem that I don't know how I'd solve just yet, I create some sample code I later trash and do a lot of work in the Ruby console.

Then, once I've gotten an idea of the problem, I can start writing out some pending tests that help me figure out structure, and then I'll start into the strict TDD loop of write a bit of test, watch it fail, make it pass, write more test, etc.


You may want to try bottom up coding. The idea is to build things that would make solving the problem space easier and stacking them.


what book?



It's not about telling you when you're done with a feature. It's about leaving step 17 with a set of tests so that the next person in the code can tell when he's done without doing steps 1-17. And you'd think it slows you down, but it really doesn't. Some advantages of TDD are less context switching (you can test your code without even leaving the code itself) and a high degree of focus (every atomic subtask has a very clear completion criterion: fix the failing test). Those are things solid developers love.


I have never seen any halfway decent developer write code for more than a few minutes without some sort of feedback, automated test suite or not. You are right that it will work more or less, but they work out the "less" part of that statement sooner rather than later. In my experience, that is a place you get to over time; only the people out of college write code for multiple hours straight, then debug everything afterwards.

That is actually the primary goal of tdd, to free you from the more mundane aspects of the code/run/debug loop. The secondary goal is to give you a good base for changing the code later and finding out what broke, again without a ton of manual actions. But as useful as that is (and it is extremely useful), it doesn't hold a candle to the first benefit.


> I have never seen any halfway decent developer write code for more than a few minutes without some sort of feedback, automated test suite or not

I do this all the time. Two reasons:

1. I can keep in "the flow" for an extended period of time. This is more important if the code is especially complicated. If I have to stop every few minutes to fix trivial errors, it's easy to forget important details of how everything is supposed to work.

2. Not having any feedback forces you to reason about the code before writing it. It's very easy to fall into the trap of writing code, then waiting until it is tested to find the errors. Thinking before writing is the fundamental skill that TDD encourages, but you don't need TDD in order to do it.

> only the people out of college write code for multiple hours straight, then debug everything afterwards.

Knuth wrote TeX in a notebook and did not test it for a good six months afterwards, though I am not aware if he was out of college at the time.


> I have never seen any halfway decent developer write code for more than a few minutes without some sort of feedback, automated test suite or not

Reading that again, I'm sorry if it came off as sort of attacky, but I really meant that as a "from my personal experience with the people I have worked with over my career" type qualification :)

I can buy #1, but only when it is something you've done a bajillion times before. When you are getting feedback every few minutes, you know exactly what introduced the problem, and don't waste time tracking things down. If you do miss something fundamental and have several hours work behind you, you tend to be more inclined to hack out something to make it work, where if you catch it a few minutes in, you can adjust your design to take it into account. I also find I can keep in the flow pretty easily with constant feedback, and I use simple todo lists to make sure I don't lose track of things.

As for 2, at least for me, I don't think there is any comparison between thinking about how things should work, and knowing if things do work before writing. TDD is definitely not a replacement for deep thought and planning, but I think that is a different beast than working out the details as you are writing them, which is where TDD comes into play.

> only the people out of college write code for multiple hours straight, then debug everything afterwards.

I sort of did it again there, I should have qualified it more :) In my experience, the better programmers I have worked with, paired with, and watched code in videos will get feedback as quickly and often as makes sense, be it with tests or without them. I know if I wrote TeX in a notebook, it would be a guaranteed unmitigated disaster :)


I think most of you guys missed the point. Writing tests is very important WHILE writing code, to write it better. We MUST write tests not only to catch regressions and to be sure that certain invariants will be maintained, but also to check whether we are writing good code. I need to write a class to do some stuff. The test is the first user of this class. If I cannot write the test very fast, and I see that I'm spending a lot of time doing it, this means that my class is poorly designed, is not flexible, is not very reusable. Maybe I'm doing something wrong with my app design. If I'm writing good, reusable and clean code, testing is easy and fast. Testing helps me check immediately what's going wrong with the code, not only in terms of bugs.


Didn't Knuth work on TeX for like 10 years?

Not all of us have 10 years to work on everything. ;)


That's the point I was trying to make. The main benefit comes from eliminating the "run/debug" part of the "code/run/debug" loop. It then just becomes "code/test" where "test" takes all of a couple seconds each time.


A couple of seconds is too long. I run a small test suite in a couple of microsecs every time I save a file, and save my file at every change. When a task is done I run all the tests.


But if you don't run it yourself, how do you know it's working? All you know is that the tests pass.

Tests can easily do more harm than good if you let them give you a false sense of security.


But surely if you have written a unit of code, you should at least know a) what valid input the code should accept, b) what output the code should return, and c) what you want the code to do! If you know these things, then wouldn't it be easy enough to write tests for at least these conditions?


It's not that. I wrote a fairly complex piece of code in, of all things, TSQL, and as the logic was unfortunately in the stored procedures and functions, I actually found that the unit tests I did for the more granular functions saved me a lot of time. This was because I would make a change to the logic of a function that other functions/procs relied on, and then all of a sudden I would find that a whole bunch of tests on functions that worked before started failing. I'd never have known this without the tests that DID work previously. Saved me a lot of time, I can tell you :-)


Then you didn't write good enough tests. I have deployed code that thousands of customers see without manually testing it. If my tests are green, I'm confident in deploying my code.


What tests your tests?


No one. That's how much faith I had in that test.


Even if you test manually what you just changed, in a relatively complex codebase how can you guarantee that your changes haven't broken behaviour in a separate yet related aspect of the system?

This is what I find the major advantage of a comprehensive test-suite to be, I don't have to worry as much about breaking any part of the system as a whole - if my suite passes, then I know everything I've worked on so far works, not just the bit I think I changed.


You can never actually guarantee anything even if you have tests. The tests will only increase the likelihood of catching regressions.


so true!!


Smart automated testing sounds amazing, until you realize that it's not smart at all. The dumb computer that you're ordering to do your bidding is the same dumb computer that is going to be running your tests, and chances are the programmer is invariant as well. In short, good programmers need unit tests less and bad programmers will write bad tests. You can't fix a personnel problem with technology.


Exactly. The computer will just repeat what you tell it to do. There is the chance that you will tell it wrong (a bug in your test code), and the chance that what you told it is not true anymore. An automated test basically saves you the work of doing the same thing over and over again as you develop.

But we must keep in mind that maintaining test code has a cost. Automated testing is not a holy grail and it isn't useful 100% of the time. You should carefully decide what code is worth testing via automation.

Unit tests in general don't catch any regressions, they only help you develop.

Functional tests might be useful, but only if you are testing something that is not likely to change much. E.g.: it is not worth automating tests for the UI if you are going to completely redesign it next month.

Manual testing can actually be cheaper than maintaining test code.


> - Testing frameworks and "best practices" change way faster than language frameworks and I simply can't keep up. What rspec version do I use with what version of Rails? Now I have to use Cucumber? I learned some Cucumber ... oh, now Steak is better. [rage comic goes here]

I think this is specific to the Rails community, where every new thing is quickly pronounced "the new right way to do things", and not just in testing.

> I deal with custom domains, authentication, and 3rd party API calls in almost every app we have. IMO, this adds 20% or more to the (already high) testing overhead just to get these things configured right in the test suite

We do the same (tests for interaction with EC2, Github, and a few other providers). It is more expensive, but we find it more worthwhile too. Normally, 3rd party APIs are insufficiently specified, especially for error conditions. So when we have a failure in production, we can easily add tests to make sure we handle that edge case in future.
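
A sketch of how such an edge case can be pinned down, using WebMock to fake the provider's response (WebMock and the GithubClient class are my own choices for the example, not necessarily what the parent uses):

  require "test/unit"
  require "webmock/test_unit"

  class GithubClientTest < Test::Unit::TestCase
    def test_rate_limit_error_is_reported_as_retryable
      # Reproduce the error condition seen in production.
      stub_request(:get, "https://api.github.com/repos/acme/widgets")
        .to_return(status: 403, body: '{"message":"API rate limit exceeded"}')

      result = GithubClient.new.fetch_repo("acme/widgets")  # hypothetical client
      assert result.retryable?
    end
  end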


People write and play with test frameworks because they are procrastinating from writing actual tests. Think about it.

Just use Test::Unit and move on with your life. Write some tests. That's what counts.


I'm working on a single page app that's about 30% Rails and 70% Coffeescript/Backbone.js. Test::Unit is practically useless for us since users never hit any plain HTML pages besides the login page.

Imagine the current HTML5 Pandora having bugs with one particular song in Chrome only. How do you test for that using Test::Unit?


There are a bajillion libraries out there for unit testing JavaScript (something Test::Unit-ish would be QUnit). Rails + Backbone + JST-style templating actually makes TDDing your JS relatively painless.


We're looking into integration-level testing for our Javascript since it'd give us full coverage of the bugs we see.

I forgot to mention we also use Socket.io for real time push updates to user Backbone models. A good number of bugs don't crop up until another user modifies data and those changes get pushed out to another person.


Testing with front end javascript is more difficult due to different interpreters, much more variation in environment, etc. That doesn't mean you can't have tests for your core business logic at least in something like Vows or Mocha.


You don't. You use QUnit and move on with your life.


I strongly feel you should try to add one test in each category. That adds a sanity check and lowers the cost of adding more tests when you really need them.

It's pretty painful to think "oh, this really needs a test, but I haven't got a test suite set up and besides, I don't know how to write a test of this kind".

Writing tests for edge cases we see in production is the most valuable thing we do. We use Airbrake to find the bugs, and then we add a test for it, if possible (it's not always possible).

That gives us good confidence that other changes aren't fucking things up. It's also a pretty sane strategy for growing a test suite when you inevitably have some portion of your code which has no tests at all.


"- Most bugs/edge cases I encounter in our production apps are things I'd never think to write a test for ..."

This is why regression tests are my favorite type of test. The need for the test has been confirmed by real world usage and once you create the regression test to fail, fix the bug, and pass the test, you won't have to ever worry about users seeing that bug again :)
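
A sketch of that fail/fix/pass loop (the normalize_username helper and the original bug are invented for illustration):

  require "test/unit"

  # Bug report: a username with a trailing space could register as a
  # second, distinct account. Write the failing test, fix the helper,
  # and the test keeps guarding the fix from then on.
  def normalize_username(name)
    name.to_s.strip.downcase
  end

  class NormalizeUsernameRegressionTest < Test::Unit::TestCase
    def test_trailing_whitespace_does_not_create_a_distinct_user
      assert_equal normalize_username("Alice"), normalize_username("alice ")
    end
  end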


"but I don't ever see myself in the 'test ALL the things' camp"

Good for you. Extremists on all sides are usually wrong.

Shoot for "test MOST OF the things" or "test the MOST IMPORTANT things" or even "test just enough things so that you know if change Y totally breaks MOST IMPORTANT feature Z".


I'm pretty far into the extremist side of TDD, and I'll say that there is such a thing as TOO many tests. Your test suite needs to run fast to be really useful.

If you have thousands of full stack integration tests that take an hour to run, you're not going to run them as often as you should be, if at all, and might as well delete them.


If they are run by a continuous integration server on check in, it kind of doesn't matter how long they take to run.


Completely false. You need to know that your change doesn't break the build, and if you wait a long time, you will have mentally switched gears when informed of the breakage. This is a big productivity sink.

(What's worse is when changes come in faster than the CI system can run the tests. Then you don't know which change broke your build because many changes were tested at once.)

Anything longer than 10 seconds is too long, in my opinion. As soon as you can get up to get coffee while your tests are running, you've lost a lot of productivity. (I recently finished a project where the tests took about ten minutes to run. That meant I could only change code 50 times per day. If the tests had taken 10 seconds, I would have been able to make 300 changes per day. That's a 6x productivity increase right there.)

Fast running test suites are absolutely essential.


That's not strictly true: you'd still like to get reasonably quick feedback, especially if you're trying to make a release.

It's also nice to be reasonably confident that your commit won't break the build, for which you should probably run a good chunk of the tests before committing.

There are some workflows (Gerrit springs to mind) where you can let the CI server work on your code without breaking anything else, but even then there's a cost to the context switch when a test failure means you have to return to a piece of code you thought you'd finished.


Then they need to be moved to a different place, at least. Tell your CI server to use them, but move them out of the way for normal developers.


My experience shows that tests are not very useful in protecting against "hard" mistakes (like unusual combinations of inputs, missing condition branch coverage, etc.) because even with 100% code coverage you don't actually cover 100% of input/state combinations. And things you didn't think of in development are usually things you didn't think of in tests too. Tests have, however, always been amazingly helpful for me in:

1. Protecting me from stupid mistakes like using the wrong variable in parameters, etc. (yes, it is embarrassing to have something like this, but I'd better be embarrassed by a test and fix it before anybody has seen it than be embarrassed by somebody else hitting it when using my code).

2. Ensuring refactoring and adding new things didn't break anything.

3. After a "hard" bug has been found, ensuring it never reoccurs.

As for dealing with authentication, etc. - that's what unit tests are for, testing stuff that is under these layers directly. And I don't see it matters what you are using for tests - almost any framework would do fine, it's having tests that matters, not how you run them.

I think you can unit-test javascript too, though I never had to deal with it myself so I don't know how.


I think that the main advantage of unit testing is that you have to write testable, modular code. It ensures a sound design, which is the cheapest phase in which to catch bugs. The regression-proofing is not a particularly big advantage of unit testing since functional and integration tests catch more bugs anyway.


You should check out QuickCheck for catching edge cases you did not think of. The idea behind QuickCheck is simple--you specify invariants in your code (called "properties") and the framework tests them with random inputs.

This tool is very widely used in Haskell, but it's been ported to a whole bunch of other languages and could make your testing more thorough. In Haskell it's also easy to use and more fun than normal tests, but I don't know what it would be like in a different language.
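
The same idea can also be hand-rolled in a few lines of plain Test::Unit if you're not in Haskell: generate random inputs and assert an invariant. You lose QuickCheck's automatic shrinking of failing cases, but keep the spirit. A sketch:

  require "test/unit"

  class SortPropertyTest < Test::Unit::TestCase
    def test_sorting_is_idempotent_and_preserves_elements
      100.times do
        input  = Array.new(rand(0..20)) { rand(-1000..1000) }
        sorted = input.sort
        assert_equal sorted, sorted.sort         # sorting twice changes nothing
        assert_equal input.tally, sorted.tally   # same elements, same counts (Array#tally, Ruby 2.7+)
      end
    end
  end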


At first I was doubtful about testing my JS code, but nowadays I do enjoy it much more than testing the Rails backend. I use my own gem guard-jasmine that runs the specs headless on PhantomJS and it's a real joy! My whole spec suite with over 1000 specs runs in under 3 seconds. I use SinonJS for faking AJAX calls to the backend, but that's just a small subset of all specs since most stuff isn't interacting with the backend.


The point of testing/TDD for me is not (just) about preventing bugs, it is more about having quick feedback. Running a test is faster than waiting until it is deployed and manually clicking around in an application. It is kind of comparable to using a REPL.


"- Most bugs/edge cases I encounter in our production apps are things I'd never think to write a test for ..."

I feel that way often too, but I write tests more as a specification for how I want the code to work than as a catch-all for bugs.

"- I deal with custom domains, authentication, and 3rd party API calls in almost every app we have. IMO, this adds 20% or more to the (already high) testing overhead just to get these things configured right in the test suite - More code is moving to front-end Javascript stuff ... so, now I have to write Rails tests AND JS tests? Sounds delightful"

I feel your pain. I currently code stuff that uses WebGL and I find it hard to test.


How do you feel about regression testing? Maybe instead of "writing" tests for potential bugs, you write tests for bugs you've found already.


Regression testing is when you check that new code doesn't break existing functionality. It's preventative, not reactive.


I've found tests very useful for refactoring. I can pretty much go wild, as long as the tests pass at the end.

About bugs in production, after you find a bug write a test that exercises that bug. Then make the test pass. That way, you're unlikely to ever have a regression on that bug.

For browser-side UI tests, selenium is very useful.


I don't want to convince you - just strengthen your point.


I test things that seem like they're important to test. I also do a lot of manual checking which boils down to "does it work?" When the manual checking is too tedious I'll write code to help. I don't do unit tests (but I don't think most people who think they're doing unit tests are, either). In general I have three big problems with the philosophy of testing, especially test-first. (Though I don't feel incredibly strongly about these--software is a big field of possibilities, to suggest One Way is the Only Way is pretty crazy.)

The biggest is that it encourages carelessness. I want to grow more careful and work with careful people, not the other way around. Tests don't seem to make people better at doing science--that is, people test the happy-case and don't try to falsify. Testing doesn't seem to make people better at writing code, and may even be hurtful. Secondly, testing instills a fear of code, like code is a monster under the bed that could do anything if you don't constantly have a flashlight under there pinning it down. Sure, I guess your entire project might depend on that one innocent-looking line of code you just changed, but if that's true, you have some serious design problems and testing is going to make it hard to fix those. Because, thirdly, it hinders design: it's very easy to code yourself into a corner in the name of passing a test suite.

Related to the design issue is a simple fact of laziness. Your code makes a test fail. Is your code wrong? Or is the test wrong? Or are both wrong? If just the code is wrong, the correct action is to fix your code to fit the test. (Which may have serious ramifications anyway.) If just the test is wrong, the correct action is to change the test. (How many people test their tests for correctness? Then test their test-testing programs for correctness? "Test all the things!" is an infinite loop.) If both are wrong, you have to change both. Obviously people will be motivated to assume that only one is wrong rather than both because both means more work.


> Secondly, testing instills a fear of code, like code is a monster under the bed that could do anything if you don't constantly have a flashlight under there pinning it down

In my experience, testing frees you from that fear. You have empirical evidence that you haven't broken things.

My company does Continuous Integration as a service. You would be utterly amazed at how often our customers break their code with tiny innocuous changes.

> How many people test their tests for correctness? Then test their test-testing programs for correctness? "Test all the things!" is an infinite loop.

Try to think of testing in terms of the value it brings to your business. Adding the first few tests to a module has immense value. Adding tests for the edge cases has some value, but you're probably at break even unless it's breaking in production [1]. Adding tests to test the tests? I would say that is valueless in nearly all cases [2].

[1] Bonus: use Airbrake to find the edge cases that happen in real life, and only add tests for them

[2] If you're writing software for cars, planes, medical stuff or transferring money, there is probably value here.


Asking if the tests are correct is really asking if the requirements are correct. If this happens a lot it means developers are writing code before they really understand the requirements. If developers have to re-write behavioral level tests a lot, it probably means the product owner/project manager/managers/stake holders/etc. are changing the requirements. A lot of pain should be felt gathering and verifying what the customer wants before a single line of code is written. Really, code is bad and as little of it should be written as possible. Developers should yell loudly when they have to re-write behavioral level tests.

Testing at the behavioral level/systems level/UX level is really verifying a lot more than just "is this code right". It provides a way to check correctness on the specifications, correctness on the behavior, complete coverage of expected usage by the end user, and assures that only the code necessary to get the behavior to work is being written (to name a few).

The carelessness I see are developers writing code without fully understanding the needs of the stake holders. The industry would be in a lot better position if managers/product owners/stakeholders/etc. were expected to provide a good set of behaviors to develop against (as an example, Gherkin or similar tools) before they start pushing developers to "deliver something on time". Note this is at the systems/behavior level and not at the Unit level.

Unit level tests provide robustness. Developers can never assure that software has no "bugs".

Behavior level tests assure completeness. Developers can assure they are meeting the requirements. (Developers can't assure they are making what the customer wants, but that is not the responsibility of a developer. That is the responsibility of the product owner/project manager/etc. I'm not saying that a developer can't wear that hat, but a developer not wearing that hat should not be held responsible for failing to provide for the wants of the customer.)

All that being said, I cannot emphasize enough how important I think Behavior Level testing is.

My 3 cents.


What one person calls carelessness, another would say freeing up the time to consider other things. Such as, the code actually doing what it needs to. We are limited beings and can only keep so much in our heads at one time. If I have to remember how everything works at some level and then want to tackle how to clean it up (refactoring) or add something new without breaking it, that is a tremendous amount of state I am managing in my brain.

Better to write tests to assert something works as expected. Then focus on what you actually want to do, finally returning to your tests and focusing on the impact of your changes.

If people are writing shitty tests, that is a different problem.

As to your second point, I am fearful of code that does not have tests. I do not know what it does, I have next to no confidence that it does what it is supposed to and no way to validate that I haven't broken it if I change it.

I find the whole pushback for tests automation very odd. Here we are working towards automating some business process, while manually testing that it works. Why wouldn't we automate the testing too? If you are not good enough to automate most of your testing, what business do you have automating something else?


I'm pretty much the one man code shop for our startup and I still write a lot of tests. The way I think of it is this: if something is tricky enough that I need to verify it in the repl, may as well capture that validation in an automated test. The trickier, more painful tests to setup are integration tests that make sure everything is hooked up correctly, from the datastore layer to the handler to the template arguments etc. I went through the pain to set this up so that we at least have smoke tests, e.g every page is visited with some data populated to make sure nothing blows up.

A good reason to write tests beyond QA is to verify your code is at least somewhat modular - being able to get code under test ensures at least one additional use beyond being hooked into your application. For that reason, I would recommend having at least one test for every module in your code. It also makes it easy to write a test to reproduce a bug without having to refactor your code to be testable after the fact.


Early on, I asked most YC founders I met whether they did testing in the early days, and almost all of them said "no". I've also not written tests in the past simply because it's a time investment--why test if you could be working on something entirely different in a few weeks? Code can be very volatile in an early stage startup.

Think it makes more sense the later stage your startup is where you're more certain of what exactly it is you're building.


I honestly use tests more as a design tool than for testing functionality. After that you end up with a kind of a regression test suite.

It's cool to try to use the API you're building before you build it.


Even when you're prototyping, I find it useful to write one test. The gains from the first test are the biggest - pretty low investment, with reasonable returns.

It won't be great, but it will provide some form of sanity checking when you work on other stuff. Of course, it informs the design, which is a very overlooked feature of testing.

Lastly, it provides a foothold for more tests. When you're working on something hairy, there won't be any obstacle to "well, maybe I'll just add this one more test to save me some time".


If you are throwing your code away every few weeks, it is probably wasted time. If your codebase is in a lot of flux, it will save you a ton of time, since a good test suite tells you what breaks every time you change something.


This is the right answer.

If you are trying to really quickly get code in front of users, and are working through a lot of ideas that don't end up going anywhere (code that is eventually thrown away), then heavily tested code is probably not the best use of your time.

Once you get a product with some traction, and are going to be working with a codebase for some time (especially a code base that will be growing), heavily tested code is invaluable.

Example: Upgrading a large Rails app (~250k lines) from rails 2.3 to 3.0 in eight weeks. Having roughly a 1:1 code:test ratio allows us to be extremely nimble. It also allows developers to work in almost any area of the codebase with confidence.

For apps that will be around for awhile and will be growing, a large test suite is indispensable.


These options are flawed. I am somewhere in the middle of the first two: mostly integration tests, with critical domain logic unit tested. Certainly not 100% of the app's functionality; closer to 80%.


I agree. This poll forces me to choose between a test suite that tests "all functionality" and "a few critical things". I think a lot of people who value high levels of testing coverage still fall somewhat short of all functionality, but are way above "a few critical things".

I'm using rails these days, and I have 100% test coverage on models and controllers (though that really just means that all the model and controller code is executed when I run my tests, these tools can't really tell if you've tested the code intelligently, though I hope I have).

I don't have a full suite of integration tests that validate all of the view logic, though there are some checks. I also have integration tests that validate external dependencies (file storage, database connectivity, etc), though again, there may be some holes.

I picked "all", since that's closest to where I am. But my best choice would be "we maintain a high (95%+) level of testing coverage". I don't think I'm splitting hairs here, because there may be a practical tradeoff between high levels and complete levels of test coverage.

NOTE: "high" levels of testing can mean different things to different people... doesn't have to be 95%, which I would consider to be higher than absolutely necessary. It depends so much on what you're actually testing (anyone who has used a coverage tool knows you can often "trick" the tool into awarding the 100% bar without doing much other than just making sure the tests run the code... which is useful in its way but can let all kinds of errors slip through).
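
For what it's worth, a "high but not necessarily 100%" target can be enforced mechanically. A sketch using SimpleCov (one Ruby coverage tool; the 95% threshold is just an example, and it has the same caveat as above, since it only measures which lines ran):

  # test_helper.rb / spec_helper.rb -- must be required before the app code loads.
  require "simplecov"
  SimpleCov.start "rails" do
    add_filter "/test/"
    minimum_coverage 95   # fail the run if coverage drops below this
  end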


Don't underestimate the value of running all the code. In a non-compiled language that will really save your bacon if you need to change things later.


I agree, it can be useful - if nothing else, it confirms that all of your code still runs after you change something.


I've had the same thought, Obie. I find high level integration tests provide most of the value for me, with unit testing when I need help with designing code. Having a decent suite of high level tests saves me from having to smoke test the entire app every time I make sweeping changes. If the suite is passing, I know the features are working, at least in the basic cases I was testing for. I still have to do some level of manual testing, but it's nowhere near as much as I did before I became more obsessed with testing.


Interesting... I think I'm with you on this one. There have been a few occasions where I got my test coverage through integration tests rather than unit or functional tests as well.

My real goal is to have tests that will sound the alarm if I've done something that breaks the application. I think this is similar to the "smoke test" you're talking about. Don't want to have to fire up the server and walk through all the use cases - it's very useful to have integration tests that will do this instead.


Agreed. Would have been great if manual testing was included. We have full time QA people who actually write very detailed test plans based on project specs/requirements and have time included in all our projects for testing and bug fixing at the end.


Never forget you write software, not tests. Tests are here to increase quality, they have no raison d'être by themselves.


I think you can go one step further. Never forget you're serving your customers, and your software has no other raison d'être. You only write software to provide value to them, so think of testing the same way.

Each test has the opportunity cost of writing some part of a new feature for your customers. But so does every minute spent fixing bugs that would have been caught with more testing, at a fraction of the cost.


I suppose, but what is the value of untested code? This sounds like an excuse for coding without testing.


There are many ways to improve code quality. Using an automated test suite is only one of them, and while it's one that is widely useful, it is of very limited value in some circumstances and I think for some developers it instils a false sense of security. Not having an automated test suite that covers a particular part of your code does not imply that the code is "untested" or of no value. It just means some other approach is needed in that case.


Not having automated test covering a piece of code does not imply that it's untested at the time it's written, but it sure as hell implies that it's not getting tested when seemingly unrelated feature X gets refactored and unknowingly breaks it.

Tests are only marginally important at the time you're writing the code they test. The real value comes months later when something else causes the test to fail, and now you a: know the code is broken, and b: have a clear specification of what that code was supposed to do.


Sorry, but I simply can't agree with most of that. I do agree that automated tests are more valuable during maintenance than during initial development, though I think they help then too. It's the other details of your comments I'm disputing below.

Firstly, even if automated testing isn't appropriate for a particular part of the code, there should still be other forms of quality checking going on that would pick up a broken feature before the code is accepted, and certainly before the product ships. If this doesn't happen, you're relying on a limited set of automated tests as a substitute for things like proper code reviews and pre-release QA, in which case IMNSHO you're already doomed to ship junk on bad days.

Secondly, if you can break one piece of code by changing a completely unrelated bit of functionality elsewhere, you have other fundamental problems: your code isn't clearly organised with an effective modular design, and your developers demonstrably don't understand how the code works or the implications of the changes they are going to make before they dive in and start editing (or even afterwards). Again, you're already doomed: no amount of unit testing is going to save you from bugs creeping in under such circumstances.

Finally, unit tests are not a clear specification of anything, ever, other than the behaviour of a specific test.

Basically, if you consider automated unit testing a substitute for any of

(a) maintaining a clean design

(b) doing an impact analysis before making changes to existing code

(c) writing and updating proper documentation, including clear specifications, or

(d) proper peer review and QA processes

then I think you're suffering from precisely the false sense of security I mentioned earlier. In many contexts, unit tests can be great for sounding alarm bells early and giving some basic confidence, but even in the most ideal circumstances they can never replace those other parts of the development process.


QA itself is a process failure. If the testers have ever repeated an action more than twice, it should be automated, and you're back to automated testing.

The only QA I've ever worked with that was worthwhile spent their time writing automated tests - they were programmers concentrated in test. Otherwise, you're literally saying 'It would be cheaper to pay this room full of people to do what a machine can do instead of paying 1/10th their number to write the same thing as a test', which is essentially never true.


Tests are a lot more about design and refactoring than they are about quality.


what

what are you even saying

i dont even

(

More concretely, if you use testing to drive your refactors and architecture--as opposed to, say, finding pain points in normal code or actual design time in preproduction--I would be concerned that you are "guardrail programming", as a gentleman put it in a talk I saw recently.

When we drive, we don't have guardrails to bounce us back on the road every time we veer off--they're there to protect us against accidents or when something goes seriously wrong with our vehicle. If you told somebody that you drove from city A to city B by hugging the guardrail, they'd say you were nuts.

Similarly, depending on unit tests to do design is strange--they're there to be sure that your code functions according to contract.

)


I disagree completely, and your comment makes me think you've never seriously used unit testing.

Writing tests makes you think about how pieces of your code interact with each other, dependencies etc.

As an example, if you're trying to test Function A and are finding you need tens of lines of setup code to be able to do so, then that would be a warning sign that you may want to think about refactoring out some of those dependencies.


I've seen code in production apps with comments along the lines of "this isn't optimal, but it's easier to test". Every time I do, I die a little inside.

Unit tests are THE most overrated buzzword of the last 10 years.


If you mean that unit tests have accumulated a lot of dogma over the past few years, and you are saying they are "overrated" because you still need to think about how, what, and why you are testing, I agree.

If you are using your post as an excuse for not using automated testing at all, I completely disagree. That's the bad kind of developer laziness.

On the other hand, I do have to concede that when competing against people who don't use unit testing on the open market, I come off looking like a wizard in terms of what I can accomplish in a reasonable period of time and the sorts of things I can do (successful major changes to large existing code bases you wouldn't even dream of starting), so maybe I shouldn't try so hard to encourage others to use them sensibly. So, I mean, yeah, totally overrated. Have I also mentioned how overrated syntax highlighting is? You should totally just shut it off. Also, fixing compiler warnings is for wusses, and what moron keeps putting -Werror in compilers?


How much of that complexity is self-inflicted? Most of the unit testing advocates I know are also the worst architecture astronauts.

Every line in a codebase has a cost, including tests. I'd rather deal with a code base that's as trim as possible.

I've done unit tests before, but I don't find that they help that much, because they don't solve the most common source of actual production issues: things you didn't think of.


I find they do help there. Having unit tests makes me trust my code better. Confronted with a "it does not behave as I would expect" issue, that trust helps me focus attention away from the implementation of those functions.

Problem with that is that, to get that trust, I need to know that unit tests exist, and, preferably have spent time writing or reading them. Question then is whether that time would not be spent better on reading the existing code. I think that, often, the answer to that is "no", but I cannot really argue that.

Perhaps it is because writing unit tests puts you explicitly in "break this code (that may not have been written yet)" mode. Writing a unit test that calls a function with some invalid arguments and verifies that it throws is often simpler than reading the code to verify that. Also, unit tests may help in the presence of bug fixes and/or changing requirements. Bug report/requirements change => code change => unit tests break => free reminder: "oops, if we change that, feature X will break".
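
A sketch of that "call it with invalid arguments and verify it throws" style of test; Account#withdraw is a made-up example:

  require "test/unit"

  class WithdrawTest < Test::Unit::TestCase
    def test_negative_amount_is_rejected
      account = Account.new(balance: 100)   # hypothetical class
      assert_raise(ArgumentError) { account.withdraw(-5) }
    end
  end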


How do you then know that everything works fine when you do large scale refactoring? Test everything manually? (genuine question, not trying to be snarky).


He doesn't. And I'm not being snarky either. People will say they do, but they don't have any assurance of it. And furthermore, over time they'll learn to stop making these sorts of changes because they don't work, become very cynical about what can be done, and internalize the limitations of not using testing as the limitations of programming itself.

And then these people will be very surprised when I pull off a fairly large-scale invasive refactoring successfully, and deliver product no engineer thought possible.

I'm not hypothesizing; this has been my career path over the past five years, and I have names and faces of the cynical people I'm referring to. You can not do the things I do without testing support. I know you can't, because multiple people who have more raw intelligence than I try and fail.

It is equally true you can't be blind about dogma, 100% coverage being a particularly common bugaboo, but I completely reject the idea that the correct amount of automated testing is zero for any non-trivial project.


I get the impression that the code base on which you pulled off the "large-scale invasive refactoring" was not initially under test, else why would the cynical engineers think it could not be done. So did you have to bring the legacy code under test first?


Yes.


I'm curious as to what exactly you mean. Can you give some examples? If you're frequently making large-scale changes, I'd spend more time worrying about why you're having such a hard time nailing the requirements down.


If you've only worked on projects with nailed-down requirements, you're probably not working on the sorts of projects most HN people face. The requirements change because the world changes, or our understanding of it. That's the nature of the startup. Stable codebases serving stable needs don't need as much refactoring, that's true. And in those cases unit tests might be a waste of time. But for those of us (the majority, I'd wager, at least around here) who work on fast-moving, highly speculative projects, they are an absolute godsend.


A framework previously designed to work on a single device was ripped apart and several key elements were made to run over a network remotely instead. (That may sound trivial in a sentence, but if anyone ever asks you to do this, you should be very concerned.) The framework was never designed to do this (in fact, "framework" is a generous term for it), and tight coupling and global variables were used throughout. This was not a multi-10-million line behemoth, but it was the result of at least a good man-century of work.

As mentioned in my other post, first I had to bring it under test as is, then de-globalize a lot of things, then run the various bits across the network. Also testing the network application. Also, by the way, releases were still being made and many (though not all) of the intermediate stages needed to still be functional as single devices, and also we desire the system to be as reverse-compatible as possible across versions now spanning over a year of releases. (You do not want to be manually testing that your network server is still compatible with ~15 previous versions of the client.) And there's still many other cases I'm not even going into here where testing was critical.

The task I'm currently working on is taking a configuration API that has ~20,000 existing references to it, currently effectively in "immediate mode" (changes occur instantly), and turning it into something that can be managed transactionally (along with a set of other features) without having to individually audit each of those 20,000 references. Again, I had to start by putting the original code under a microscope, testing it (including bug-for-bug compatibility), then incrementally working out the features I needed and testing them as I go. The new code needs to be as behavior-similar as possible, because history has shown small deviations cause a fine spray of subtle bugs that are really difficult to catch in QA.

I could not do this without automated testing. (Perhaps somebody else could who is way smarter, but I have my doubts.) The tests have already caught so many things. Also, my first approach turned out wrong so I had to take another, but was fortunately able to carry the tests over, because the tests were testing behavior and not implementation. (Also it was the act of writing those tests that revealed the performance issues before the code shipped.)

This isn't a matter of large-scale requirement changes on a given project. This is a matter of wanting to take an existing code base and add new features that nobody thought of when the foundation of the code was being laid down 5-7 years ago. (In fact, had they tried to put this stuff in at the time it would have all turned out to be a YAGNI violation and would have been wrong anyhow.) Also, per your comment in another nearby thread, the foundation was all laid down prior to my employment... not that that would have changed anything.

The assumption that large-scale changes could only come from changing requirements is sort of what I was getting at when I was talking about how the limitations of not-using-testing can end up internalized as the limitations of programming itself.

Might I also just say one more time that testing can indeed be used very stupidly, and tests with net negative value can be very easily written. I understand where some opposition can come from, and I mean that perfectly straight. It is a skill that must be learned, and I am still learning. (For example: Code duplication in tests is just as evil as it is in real code. One recurring pattern I have for testing is a huge pile of data at the top, and a smaller loop at the bottom that drives the test. For instance, testing your user permissions system this way is great; you lay out what your users are, what the queries are, and what the result should be in a big data structure, then just loop through and assert they are equal. Do not type the entire thing out manually.) But it is so worth it.
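
A rough sketch of that data-up-top, loop-at-the-bottom shape, using Minitest; the allowed? helper, the roles and the actions are all made up for illustration:

  require "minitest/autorun"

  # Hypothetical permission check standing in for a real permissions system.
  def allowed?(role, action)
    rules = {
      admin:  [:read, :write, :delete],
      editor: [:read, :write],
      guest:  [:read],
    }
    rules.fetch(role, []).include?(action)
  end

  class PermissionsTest < Minitest::Test
    # All the interesting content lives in this table...
    CASES = [
      # role,    action,  expected
      [:admin,   :delete, true],
      [:editor,  :write,  true],
      [:editor,  :delete, false],
      [:guest,   :read,   true],
      [:guest,   :write,  false],
      [:unknown, :read,   false],
    ]

    # ...and the driver is just a small loop over the rows.
    def test_permission_table
      CASES.each do |role, action, expected|
        assert_equal expected, allowed?(role, action),
                     "expected allowed?(#{role}, #{action}) to be #{expected}"
      end
    end
  end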


That's an amazing story.

Few questions:

1) How many lines of code are in that man-century project? Is the number of lines of code roughly proportional to the number of man-hours, or is lines as a function of man-hours roughly logarithmic?

2) What does your typical project (or that project) look like in terms of testing vs. coding? Do you spend a few months covering old code with tests and only then start adding features? Or do you follow an "add tests - add features - add tests - add features - ..." cycle?

What's the ratio of time spent writing tests to time spent writing code?

3) What's the proportion of time you spend directly working (analyzing requirements/testing/writing code) and generally learning (books, HN, etc.)?

4) Do you do most of the work yourself or are you mostly leading your team?

5) How do you pick your projects, and when you pick them, what is your relationship with the clients: Fixed contract? Hourly contract? Employment?

Thanks!


So, at the end of the day, you never actually did design-from-scratch work, and instead used tests to verify incremental design improvements (key part: verify not create)?


New hacker news rule: if you haven't done at least 80% of what he's talking about, you can't dick-measuring-contest him.

From scratch work is the easier part of programming.


I've done this before, friend. Starting from scratch is indeed easier.

The point I was making was that he used unit tests to confirm his design (as a safety net) and not as a primary design tool.


Starting from scratch does not take into account all the growing pains the previous software had that made it into the quagmire you have learned to hate.


The sum total of the improvements was not incremental. Testing helped give me a more incremental path, but from the outside you would not have perceived them as such.


I don't do large-scale refactoring. Seriously. Small pieces? Sure.

But I've never, in 15 years of development, had to rewrite half of an application I've already written.

Spending a large amount of extra time and energy, things I don't have an excess of to begin with, for a "might" or a "maybe" seems like a rather poor choice to me.


I agree 100% ;) I'd have to start learning new stuff that I don't necessarily want to get into yet!


Aside from the unit test problem, suboptimal but easy-to-test code is critical in a lot of situations, like time-critical bug fixes or last-minute feature additions on a production site.

Most of the time, testing takes more time than writing the code, so throwing optimality under the bus can be the best choice. If it's Good Enough, nobody's going to rewrite it, but I wouldn't see that as something inherently negative or shameful; it's just a question of priorities.


On the other hand, if your code is full of architectural compromises, special cases and privilege escalation tricks just to allow you to test everything in some particular way, maybe the tail is wagging the dog?

There are many ways we try to improve code quality and make sure we get it right. Automated test suites are only one of them. Software design needs to take multiple factors into account, and letting one of them arbitrarily dominate all others is a dangerous path to take.


If I have a function/module/method buried deeply inside my system such that testing it requires either ten lines of setup code or backdoors ("special cases and privilege escalation tricks") in the deployed code, that might say something interesting about my architecture in either case. Is the code really only ever going to be called from that one place and in that one way, and if so, exactly how valuable is it? Sure, it might be that the only place I currently want to call (say) a weighted modulo 11 checksum is in credit card validation and the context there is I have a third-party payment gateway and a valid order object and all that stuff, but I would still be looking at surfacing the actual calculation in a library module somewhere that I can test it without doing all this setup. I grant you that architecture is only ever easy in retrospect - that's why we refactor - but I don't think that represents an architectural compromise.
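
Roughly the kind of extraction I mean, sketched with Minitest and an ISBN-10-style weighting chosen purely for illustration: the checksum lives in a plain module and can be exercised with no gateway or order object in sight.

  require "minitest/autorun"

  # Hypothetical library module: the calculation is surfaced on its own,
  # so testing it needs no payment gateway, order object, or other setup.
  module Checksum
    # Weighted modulo-11 check in the ISBN-10 style: weights run from 10
    # down to 2 across nine digits, and the check digit makes the total
    # divisible by 11 (a check value of 10 is conventionally written "X").
    def self.mod11_check_digit(digits)
      sum = digits.each_with_index.sum { |d, i| d * (10 - i) }
      check = (11 - sum % 11) % 11
      check == 10 ? "X" : check.to_s
    end
  end

  class ChecksumTest < Minitest::Test
    def test_known_isbn10_check_digit
      # 0-306-40615-2 is a commonly cited valid ISBN-10.
      assert_equal "2", Checksum.mod11_check_digit([0, 3, 0, 6, 4, 0, 6, 1, 5])
    end

    def test_check_value_of_ten_is_rendered_as_x
      assert_equal "X", Checksum.mod11_check_digit([1, 0, 0, 0, 0, 0, 0, 0, 1])
    end
  end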


If all your algorithms are as trivial as calculating a weighted modulo 11 checksum, then the sort of case I'm thinking of doesn't apply. However, in real code, we sometimes have to model situations and solve problems that are inherently complex. The algorithms and data structures we work with will necessarily reflect that essential complexity, and ultimately so will our code.

Beyond a certain point, I think automated tests that give simple yes/no answers are no longer a particularly effective way to test certain types of complex algorithm. Sometimes there are just too many possible inputs and interactions between different effects to get a sensible level of coverage and draw any useful conclusions from that kind of testing alone. You might still have some automated tests, but they are more like integration tests than unit tests at that point.

Internally, you could try writing almost-unit-tests for the implementation details, but then you get into the usual concerns about backdoor access and tying tests too closely to implementation details that might change frequently. Alternatively, some form of careful algorithm design with systematic formal proof might be called for. Maybe instrumenting the code and checking the actual values at key points will highlight errors that aren't yet manifesting as faults, things that a boolean automated test would miss because they haven't violated some arbitrary threshold but which form an unexpected pattern to a knowledgeable human observer. However, in these cases, you really want the code to be as simple as possible, and hooks to permit internal access to run some automated test cases as well could cause an awful lot of clutter.


> If all your algorithms are as trivial as calculating a weighted modulo 11 checksum, then the sort of case I'm thinking of doesn't apply.

My estimate is that 98% of all programming everywhere is as algorithmically trivial as calculating a weighted modulo 11 checksum - probably more so - and it acquires its bugginess from accidental complexity due to poor factoring, and from conflicts at interfaces. Test-driven development is pretty good, in my experience, at helping ameliorate both these problems.

Of course, that doesn't mean I actually do it 100% or even 80% of the time. I'm happy to agree that it's no panacea: testing threads and UIs are particular pain points for me, and usually I substitute with either Thinking Really Hard or just Not Changing Stuff As Much.

Formal proof for me is stuff I learnt at college, forgot subsequently, and keep meaning to read up on again. Thank you for prompting it back up my TODO list.


> My estimate is that 98% of all programming everywhere is as algorithmically trivial as calculating a weighted modulo 11 checksum - probably more so - and it acquires its bugginess from accidental complexity due to poor factoring, and from conflicts at interfaces.

I think it depends a lot on your field.

If you're working in a field that is mostly databases and UI code, with a typical schema and most user interaction done via forms and maybe the occasional dashboard-type graphic, then 98% might even be conservative.

On the other hand, if you're doing some serious data munging within your code, 98% could be off by an order of magnitude. That work might be number crunching in the core of a mathematical modelling application, more advanced UI such as parsing a written language or rendering a complex visualisation, other I/O with non-trivial data processing like encryption, compression or multimedia encoding, and no doubt many other fields too.

Generalising from one person's individual experience is always dangerous in programming. I've noticed that developers who come from the DB/business apps world often underestimate how many other programming fields there are. Meanwhile, programmers who delight in mathematical intricacies and low-level hackery often forget that most widely-used practical applications, at least outside of embedded code, are basically a database with some sort of UI on top. And no, the irony that I have just generalised from my own personal experience is not lost on me. :-)

This can lead to awkward situations where practical problems that are faced all the time by one group are casually dismissed by another group as a situation you should never be in that is obviously due to some sort of bad design or newbie programmer error. I'm pretty sure a lot of the more-heat-than-light discussions that surround controversial processes like TDD ultimately come down to people with very different backgrounds making very different assumptions.


"Writing tests makes you think". A developer should already be thinking about these things when they are writing their code.


Sure, but context shapes behavior. People should be eating better too, but that's a lot easier to do when you have a fridge full of vegetables than a cupboard full of Doritos. Test-driven development forces me to think about code from the outside first.


My original disagreement was more along the lines of "unit testing is more important for design than for QA" and less "unit testing is important".

I certainly support unit testing; it's essential (and anyone telling you otherwise is bonkers) to ensuring that code follows its contract.

That said, if unit testing were great for design but didn't spot errors, it'd be useless. Whereas if it were useless for design and good for errors, that's okay, because I can do the design work myself.


Redesign and refactoring are often related to quality, aren't they?


I was in the 'testing is too much overhead' crowd for years until one day I finally got it. I realized that as I code, I'm always testing. Who doesn't make a change and then test it? So, you consider writing a test too much overhead? How much overhead is it to manually test? How much overhead is it to fill out that registration form you're testing? Maybe there are two or three steps to it. How much time does that take each and every time you test? Being one that enjoys automating repetitive tasks, writing that test _once_ suddenly became a no-brainer.
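
For what it's worth, automating that registration walk-through looks roughly like this with Capybara and RSpec; the MyApp constant, field labels and "Welcome" copy are placeholders for whatever your app actually uses:

  require "capybara/rspec"

  Capybara.app = MyApp  # stand-in for the Rack/Rails app under test

  RSpec.describe "registration", type: :feature do
    it "signs up a new user" do
      visit "/register"
      fill_in "Email",    with: "someone@example.com"
      fill_in "Password", with: "s3cret"
      click_button "Sign up"
      expect(page).to have_content("Welcome")  # runs in seconds, every time
    end
  end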

This realization only made all the other arguments for testing that much stronger.


True story: I took over for a developer working on a large and complex multi-step form. I wasn't surprised that there were a few little bugs in it, but I noticed that the number and severity of the bugs increased as you went through the form. The first step was pretty much bug free, but the final step was completely broken.

Many people who claim that test automation is "too much overhead" either don't understand what test automation is or don't understand what overhead is. If you have to test everything in order to change anything you either have a huge manual testing overhead or have a huge quality liability.


There's a lot of truth to this. I work on a project with a lot of separate assemblies, because we have many applications that share similar functionality. Unit tests are critical to making sure I don't break something for one application while making a change for another. At the same time, it can be a real pain to load the entire application to test it. Unit tests have actually increased my productivity in areas where doing a manual test has about a 1-5 minute overhead (compiling, then loading a file, etc.).


I don't believe anybody that says they test all functionality. Most? Sure. All? No way. Not in a non-trivial codebase.

Article about the group that writes the space shuttle software, sort of relevant?: http://www.fastcompany.com/magazine/06/writestuff.html


The trouble is that the poll doesn't have a middle ground between "all functionality" and "a few critical things".

A full run of our test suite literally takes months on a cluster of hundreds of CPUs (obviously, there are also faster versions of the tests which are run frequently). While I have a long list of additional test coverage that I would like to add, what we test is much closer to "all functionality" than it is to "a few critical things".


Would you be able to share what type of software it is? I'm curious what takes that long to run


I'm also working on some software which tests a lot of functionality, not just 'a few critical things' but certainly not 'all functionality' either.

I'd say that a lot of good responses would have been in between those two.


Agreed. Missing the option of "Most functionality", or "All functionality within reason". Without that option anything I select would be misleading.

I think it's safe to assume that anyone who selected "All functionality" actually means "Most functionality". Also I think we can assume that a good proportion of people who selected "A few critical" would belong in the "Most" bucket.


Well, there's `all` and there's virtually all. All is 100% branch and statement coverage, and is a big waste of time.

When I say my codebase has 'all' functionality tested, I mean we don't commit code without tests included too. I think that's a pretty reasonable definition.


100% branch and statement coverage doesn't begin to cover "all". Consider:

  double sin(double x) { return x; }
Simply testing x = 0 gives you 100% branch and statement coverage, but I don't think you want to ship just yet =)


That's not entirely fair - you haven't tested the branches or statements inside the sin function.

However, even if the sin function is already tested elsewhere, you will still need further testing to ensure that you are calling it correctly (e.g. not confusing degrees and radians).

EDIT: Yes, I read it wrong - clearly need coffee...


The original poster gave an implementation of sin(), not a unit test. That implementation has no branches in the source and, for any decent compiler, will not have any branches on the machine, either.


Perhaps you need to read it a little more closely? For the given function, he has indeed tested all the branches and statements.


Also, it's the simplest thing that can possibly work! (And for very small values of x is also the best possible implementation.)


Even better: it's a correctly-rounded implementation for nearly half of the input space!


yep, you're absolutely right. You can get 100% testing coverage when you define it as "percentage of code executed when tests are run".

That said... that kind of coverage isn't quite as useless as it might seem. If your tests do execute every line, even in a completely contrived way, you will catch a lot if you change your code. You just tend to catch more of the "wrong number of arguments passed to a method" kind of error than "you are allowing the autopilot to try to land the plane 100 feet below the runway" kind of error ;)


Careful though with tests that literally execute every line of code: You tie your test to your implementation. That makes even the slightest refactoring difficult. Better to have unit tests that only care about the functional interface.


Depends how you define functionality I guess. If you are talking about high-level user functions (create a new user, modify user, delete user), then lots of organisations probably do have tests for all functions.

However, if you consider functions on the code level (e.g. Java methods) then organisations with 100% coverage will be thin on the ground. If you go further and consider line coverage, almost nobody will have 100% coverage.

A common problem with organisations claiming to test all functions is that they will only test the happy path - there will be few tests for things like unexpected or illegal input etc.
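
To make that concrete, the non-happy-path cases that tend to go missing look something like this (RSpec; the Age.parse helper is made up for illustration):

  # Age is a hypothetical helper; only the shape of the cases matters here.
  RSpec.describe "Age.parse" do
    it "parses a normal value" do
      expect(Age.parse("42")).to eq(42)  # the happy path everyone writes
    end

    it "rejects garbage input" do
      expect { Age.parse("forty-two") }.to raise_error(ArgumentError)
    end

    it "rejects out-of-range values" do
      expect { Age.parse("-1") }.to raise_error(ArgumentError)
    end
  end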


Sure, but "All critical, most important, and lots of trivial" wasn't an option.


Unfortunately our sales people are obsessed with agreeing to whatever customers dictate in order to make a sale. The customer wants a full featured, fully customized, fully automated E-commerce solution and they want it for a flat $5000? Sold. Customer says "What is this 'testing' sh*t on the quote? It should just work the first time, or do you only have a Jr developer on staff who needs everything double checked for them? We can go some place more professional" and sales person replies "Oh yeah, that - you're right. Our developer is a wizard and I forgot to take that off."

No matter how many times I explain or quote higher or tell them the feature creep is becoming unreasonable (oh by the way, we have 18 products with complicated interactions, not the 3 we asked for on the quote, but we expect to still pay the same), such that I can't possibly write it all and test it all, they just don't listen and they leave me holding the bag. So, while I'd like to do testing, just getting the thing kind-of working isn't in the budget, never mind getting it working well.

Sorry for the rant and... come to think of it, it may be time for a new job.


I'm not a testing fanatic, but I do TDD. I don't put "testing" on the invoice any more than I would put "typing" or "refactoring." That's internal to my process of delivering quality software, and either you like my estimates/deliveries or you don't.

But... it does sound like you need a new job.


Do not include testing in the bill. It is quite confusing for somebody outside of software development, and I find their comments about jr. developers quite reasonable. Just put the number of hours or an amount that covers both development and writing tests.


I don't understand why you would include testing in the invoice separate from development. It makes it seem like an activity that could be eliminated if need be.

But, are you really going to deliver code that you developed without testing it in some way? You might not be writing test code, but I'd bet you are still doing testing of some other kind.


We actually made a company to do other people's testing: http://CircleCI.com. Really easy Continuous Integration for web apps. Email paul@circleci.com for a beta invite.

That said, I subscribe to the philosophy that testing is only there to support the business, not an end in itself. We often prototype features with no testing at all, because they get rewritten 3 times anyway. Often, writing the tests is what highlights flaws in our logic, so without it we would often be flying blind.

Testing slows down coding by about 135% (yes, more than twice as slow), but makes that time back in spades when you have to work on the same code again, or when changing lower layers (models, libraries, etc).


I think the response anyone is likely to give to this poll depends a lot on the kind of work they do.

When I write a software package/library, I'll usually test the hell out of it for the very same reason so many others have given: if you're testing in a REPL anyway, why not just turn those snippets into unit tests? Hardly any effort.

But I usually don't bother with too much automated testing for websites or web apps, because (1) it's more difficult to actually catch the errors you care about, have good test coverage and keep tests up to date than it is for back-end stuff and (2) I actually like clicking through my app for a while after I've implemented a new feature or changed an existing one.

Manually testing a web app allows you to catch many different kinds of mistakes at the same time. Almost like an artist looking at an unfinished painting. Does the UI look off? Does X get annoying after doing it ten times in a row? Does everything flow nicely? What is this page missing? Did that customer's feature request you got three days ago actually make sense? Questions you should be asking anyway, even with automated tests. And basic functionality is tested because the underlying packages are tested.

... but then again, if I was writing a website backed by a RESTful API, testing that API is as easy as doing a couple of HTTP requests and checking the responses, so you'd be stupid not to go for that quick win.
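
Something like this is what I mean by a quick win, using rack-test and RSpec (MyApi and the /widgets routes are placeholders for whatever the app actually exposes):

  require "json"
  require "rack/test"

  RSpec.describe "widgets API" do
    include Rack::Test::Methods

    def app
      MyApi  # stand-in for the actual Rack application
    end

    it "lists widgets as JSON" do
      get "/widgets"
      expect(last_response.status).to eq(200)
      expect(JSON.parse(last_response.body)).to be_an(Array)
    end

    it "404s for a missing widget" do
      get "/widgets/does-not-exist"
      expect(last_response.status).to eq(404)
    end
  end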

So my answer is "We have a test suite that tests all functionality" and "Tests? We don't need no stinking tests." at the same time.


People ... don't have tests? o_O In 2012?

I am seriously considering putting together a "Software Engineering for Small Teams" course or set of articles. With a little bit of expertise, you can inject testing in to most projects, use the minimum of Agile that'll help, and generally massively raise your game - and by that I mean code faster, better, and more reliably, with considerably less stress.

(edited: turns out I forgot which year we're in :-P)


I think it all depends.

I used to always write proper full-fledged tests. Then I started my startup, building a product in the few hours left after a demanding high-stress job and a tumultuous private life.

Within a few weeks, I stopped writing tests. Within a few more weeks, I turned off the test suite.

I wrote the product, got it working, received market feedback, realized my model was all wrong, rewrote the entire domain model and UI multiple times all to finally realize that my component boundaries were all wrong and intuitively understanding where they should've been.

Now I feel confident about an architecture that will stay stable for 12+ months and each new component I write is properly tested.

In the meanwhile my lack of tests is starting to bite me very slowly, but I find that I'm just slowly replacing all 'bad parts' with properly tested components with clearly defined boundaries, rather than changing existing code.

And in the end I'm really happy that I decided not to test as much. It has its place, but when your time is really precious and you're trying to mold your software to fit the market needs, it just isn't worth it.

I don't know how many others are in a similar situation but, for me, sometimes it just ain't f*ing worth it.


I'd been working for years in a workplace that tests virtually everything up front until I joined a startup, and I agree with you.

Experimental features may be very short-lived, or require extensive tweaks, and the technical debt that accumulates from not testing may never arise over their lifetime. Once you're sure it's going to stick around forever, do it right and cover it with tests.


I'm doing a startup as well, and we do a fair bit of testing.

One of the keys to making that work for us is a short feedback loop. We automatically release on every commit, which means every couple of hours. Speculative features get minimally implemented; if they look good then we beef them up more. Our goal is to avoid not just the unneeded tests, but the unneeded feature code too.

I'm personally pretty happy with the testing in that we don't have to spend much time on debugging or manual testing. It's very nice to make a major change, poke at it a little bit, and then ship it with a fair bit of confidence that it will work.


Those are all fair points, and I feel they cover similar ground to an article of mine:

http://www.writemoretests.com/2011/09/test-driven-developmen...

I'm always amazed by how well the whole 'technical debt' analogy holds up. Yes, leveraged development at the beginning is fast, and sometimes a good idea for getting to MVP. But the cost is still there, and will become apparent, and needs dealing with.


I'm a lot less surprised. Not everyone gets to work on a shiny new codebase which was created after regular testing was the norm. A lot of us work on maintaining code that's 5/10/20 years old, and have to worry about things like maintaining and adding functionality over refactoring the entire codebase to support unit tests.

When you're in this position, you're going to get more value out of creating a smaller set of functional integration tests that cover the critical functions of the project. Sure, adding new tests as you add functionality is a good idea, but it's not going to result in total coverage for a very long time.


Who's talking about adding all?

My area of expertise is adding tests to legacy codebases. Obviously you're not going to hit the whole thing overnight. But that's no excuse for not having /any/.


The very first thing I do when I take over a codebase is to write tests. Without tests, it's impossible to do maintenance work or add functionality in any sort of rigorous fashion--how can you know that your assumptions about how the code works are correct? How can you know that your trivial change didn't break something?

Of course, tests don't actually tell you these things. But they can tell you that your assumptions were wrong, or that your trivial change broke feature xxx, and that's crucial information to have.


Do you always have the time/bandwidth to write these tests? I'm curious what you might do if an old codebase lands in your lap and someone says "here, fix these bugs by the impossible_length_of_time."

I appreciate the idea here, and I've done the same in certain circumstances, but typically that means writing tests for bits of functionality that I need to touch.


Saying something's feasible if you throw the tests out just doesn't make sense to me.

Without good automated, easy-to-run tests, you're going to blow more time fixing bugs and ad hoc testing in the long run.

Thinking you save time by not testing is a lie on all but the most trivial of projects.


If it's impossible, then the diligent engineer says so. Projects are often doomed by people with "can-do" attitudes attempting to achieve the impossible.


Do you always have the time/bandwidth to write these tests?

Does an ER surgeon always have the time/bandwidth to scrub hands before surgery?


Does an ER surgeon always have the time/bandwidth to scrub hands before surgery?

Does it potentially take the surgeon several days to scrub before an emergency surgery?

Edited to add: I appreciate the analogy, but it's flawed. If someone comes to me and says "here, developer A is on holiday, and we have this bug that is causing massive disruption in the field," is it appropriate for me to say "well, I can do that, but it will likely take me five days to understand the codebase and write the appropriate unit test suite"?

This is the circumstance I'm thinking about, not necessarily inheriting a codebase and having to add features to it. In that case, certainly, I'm going to take my time, read the code, and write tests.


In this scenario, there should be tests already present covering developer A's portion of the codebase, together with documentation on how to run them (though tests should be as self-explanatory as possible).

In fairness, I recognize that this isn't always the case in the real world. Sometimes you really do need to just blindly attempt to fix something, and there's nothing to be done about it. But it should never become a regular occurrence, and you should never get comfortable doing it. First thing I would do is tell my manager exactly why I'm uncomfortable, and what a conservative assessment of the risk is. If we decide to go ahead with the change anyway, I would create two new entries in the bug tracking system, which should be developer A's top priorities as soon as she returns: thoroughly vet my changes, and DEVELOP A SET OF TESTS.


I see exactly where you're coming from, and I'm there all the time.

It just troubles me that people are so often willing (and eager!) to waste a lot of time doing half-assed manual testing when they claim not to have any time to write tests. Especially when the state of the art in test automation is better than it has ever been.

This has me thinking that the importance of test automation is related to the proposed frequency of changes. If someone wants a one-off change for something this very second I'll just change it. If someone wants me to inhabit a codebase for any length of time, I'll always set up tests for it. The problem is where you can't tell the difference between those two scenarios until it's too late.


Let's say you don't have the time to give it the complete understand-and-write-unit-test-suite approach though.

How are you verifying you fixed the bug otherwise? By changing some code, building the app, and running it to verify the bad behavior doesn't happen anymore? I don't really see how not writing a unit test (assuming the code is unit-testable in the first place) saves you any time. You are doing testing anyhow.

And if it was a critical bug, personally I'd want to feel as confident as possible that I fixed all permutations of it.


frobozz nailed it, really. If something can't be done, it is the engineer's responsibility to make that known to his manager, who is responsible for communicating that to whoever is asking for the work.


Of course working on a code base without a majority test coverage is dodgy (and intellectually frustrating), but it's a necessary skill.

I feel that it is unreasonable to expect that you will be able to pick up any code base and immediately write sufficient tests to get coverage on a majority of the code base. Speaking from my experience picking up old code bases, just being able to write isolated unit tests would require refactoring most of the code base, which is typically not something you will have time to do before you're expected to do other work.

I can't think of a single manager that I've worked for who would accept me saying, "it's going to take me 3-6 months of refactoring & building tests before I can start fixing bugs and providing enhancements."


> I can't think of a single manager that I've worked for who would accept me saying, "it's going to take me 3-6 months of refactoring & building tests before I can start fixing bugs and providing enhancements."

I can't think of a single developer I've worked with who would try that approach.

When a bug is identified in a project with few-or-no tests, the approach that I usually see taken is to write some sort of large, slow integration test that exercises the bug, then fix it. That allows you to prove that the bug exists and prove that the fix fixes it, at least for the documented case(s).
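
A rough sketch of that kind of pin-the-bug-down test (Minitest; the Invoice class and the scenario are made up for illustration):

  require "minitest/autorun"

  class FullDiscountRegressionTest < Minitest::Test
    # Reproduces the reported bug: a 100% discount used to blow up
    # instead of producing a free invoice.
    def test_full_discount_produces_a_free_invoice
      invoice = Invoice.new(subtotal: 50.00, discount_percent: 100)
      assert_equal 0.00, invoice.total
    end
  end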

There's no reason to cover an entire legacy code base with tests if you're only changing a small portion of it.


It depends what you're working on. If you've got a project that has to grow fast, adding more features than fixing bugs, not even knowing what you're going to keep in a few months, the time spent fixing regression bugs due to lack of tests I think is relatively little.

You may say "I'll thank myself later", but in this sort of business there won't be a later if we're not fast enough. It's a lesser of evils thing.

It would be nice if testing were a faster thing to do. The faster you can do it, the lower the threshold would be for this sort of a judgement call.


I think you'd find that most significant software has some kind of testing, but if you inherit a quarter million lines of code and need to make one focused change, the case for spending six months writing full coverage just does not get funded.


This is a fantastic idea - a lot of 'small teams' think that they are too small for extensive tests, or don't know how to organize themselves effectively.


When making a business, you have to continually make tradeoffs. Do I work on new customer features, do I work on customer acquisition features, do I fix bugs in old features, etc. Testing has value, but it often doesn't have the highest value. I totally agree about raising your game, but I can see how young startups especially race ahead without them (often to have them crash down on them 3 months later)


Sometimes I'll forget what year it is too, or maybe that was just a typo. But my OCD just isn't letting me let this slide...

*2012


Even more amazing... it's 2012!


The day I changed one line of code and 100+ tests failed was the day I really got it.


Although that could just be a sign of brittle tests....


I've never done automated testing, but as I've grown as a developer and started dealing with more complicated codebases, I have come to see the importance of testing in a huge way.

With a small codebase that you know every inch of, it's easy to test most of your interactions before you push something live, but when you get just one order of magnitude higher you start seeing how easy it is to write code in one section of your app, test it rigorously, but not catch some subtle breakage in another (seemingly unrelated) section of your app.

In production software, especially if you have paying clients, this is simply unacceptable, which is why I've recently been boning up on BDD, TDD, and continuous integration and am trying very hard to slowly integrate them into my development process.

To one of the comments before: in my experience, automated testing should actually make you bolder with code, not more fearful. We have this codebase where I work that is a frickin mammoth of interrelated modules, and it's so scary to go in there and add or change something, because I just know something else is going to break and I'm going to be stuck fixing it for days after I made the first edit.

This is the other reason I started exploring automated tests ... because I realized that if I had a test suite that could catch regressions when I refactor code, then I could actually spend more time whipping old code into shape instead of patching it up until such a time when I'd be able to just rewrite the whole thing.


Trapping regressions is a HUGE driver for testing for me.


I test almost everything in my apps and I can't imagine writing my software without it nowadays. I test my Ruby code in the backend, the CoffeeScript code in the frontend, and I have integration tests to verify that the whole stack works fine.

It took me a lot of effort to learn it properly. I have read many books about testing, have read the tests of plenty of open source software to see how others do it, and I wrote thousands of wrong tests until I got to a stage where I can say I have mastered testing.

I was always fascinated by test driven development, but to be honest, it does not work for me and I seldom do it. In most cases I write new functionality first, then describe its behavior, and finally do some refactoring until the code quality meets my needs. When you can refactor a class without breaking a single test, you know you've done it right.

It's important that you find your way and don't try to follow the rules from others. Take your time, mastering software testing is a complex discipline and it won't happen overnight.

Even with a high level of test coverage, I always encounter new bugs when using the software. But after fixing it and adding some tests, I know at least that I will not see the exact same bug again.

I believe that writing tests speeds up my development. This may seem illogical at first, but without the tests my development would slow down with increasing complexity (Lehman's Law), and instead of adding new functionality I'd find myself fixing old stuff. So testing allows me to manage a large and complex codebase, it allows me to do complicated architectural refactoring, and I know everything important still works as expected.


I write test cases based on where the project is at that point in time. Here are three stages that can help you decide how many tests need to be there.

[1] Initial stage, where we are trying to make things work. At this stage the code base is very small (< 1000 lines). This is like prototyping. It works with limited functionality. No tests are needed at this time.

[2] Heavy development phase. At this stage, we have proved the concept. Now we are adding a lot of new features. We have identified some features as must-haves. Also, code is getting refactored/redesigned based on what we learn. At this stage, we add tests for the must-have functionality. Thus, we can ensure that important features are not broken by newer code.

[3] Mature phase. The code is mature. Most of the features are working fine. The code base may be large, 100,000+ lines. At this stage refactoring/redesigning is not easy. Mostly incremental changes are happening. At this point, we should have upwards of 70% code coverage. Typically, the test code will be larger than the production code once you have 70%+ coverage. But it is very important to have tests, since they ensure that all features are tested even when a minor code change is made.


Where's the option for "We thoroughly and immediately test every change (and all affected processes) ourselves to also ensure UX is top notch"?


This. I basically do this because it's quick and does not need any extra code, especially because I write web apps and testing is just a Ctrl+S away. No compilation, just F5.


Honest question: Do you believe that test == automated test?


WOW! I must say that I am actually surprised how many people have replied that they do little or no testing.

Perhaps this is because I am in the enterprise development world as opposed to the start-up world.

The cost and frustration involved in delivering a critical bug into a QA or production environment is much higher than the cost and frustration of writing and maintaining tests.

Every action in business has a cost associated with it. The more people involved (customers, UAT, managers, etc.), the higher the cost. The sooner you can discover and fix bugs, the fewer people are impacted and the lower the cost.

This is how you make yourself as a developer more valuable and justify your high salary/rate by ingraining habits into your daily routine that reduce costs for the business.

In this I also imply non monetary costs, like the personal costs involved in asking a VP to sign off on an off-cycle production release due to a bug that could have been identified by a test prior to the integration build.


In my experience, on projects with often-run automated unit test suites with good coverage, development goes faster. Part of this might be because for code to be highly testable, it usually also has to be well-designed and architecturally sound.


I agree. When interviewing I can usually weed out those who write tests (and write good tests) from those who just claim they do.

How?

People who don't really write tests will tell me that the advantage of unit testing is being able to see when code changes have broken stuff (which is fair enough and true).

Those who regularly write unit tests will probably bring this up, but often their first point will be 'It helps to structure code properly, makes me think about dependencies, and modularise code appropriately'.


You are mixing up automated Unit Testing with TDD. They overlap a lot but they are not the same.

There are people who could write quality software with good test coverage without following TDD style.


I used to get code back from developers EACH AND EVERY TIME with massive bugs like: unable to register, unable to login, unable to add content. I wrongly assumed that they at least ran through and checked for any bugs they introduced before sending me the new code. So each and every time I got code back I had to go through manually and check it, sign in, log out, register, add content, delete content, edit content, add category, etc...

I wish someone could make a simple service that allows me to set up my web app, set up test parameters that it tests each and every time, and tell me if it failed or not. I want to automate my babysitting.


Please don't fix this with a technical solution.

There is some reason that your developers aren't engaged in the work. Figure out why they don't care about working code or the user experience and fix that.

If you plug the obvious holes, you won't have fixed your quality problems; you'll just shift them to the places where you won't notice them right away.


Please don't fix this with a technical solution.

I can't agree more with wpietri above. There surely is a non-technical problem at play. That isn't to say that you shouldn't try to automate your babysitting, but if your need for babysitting is that severe, you have other problems.


Thank you both. I will not forget these pieces of advice.


I'll just leave this here. http://saucelabs.com/


Selenium is not the prettiest tool out there, but as the homepage says, Selenium automates browsers, making it ideal for running the kind of tests you describe:

http://seleniumhq.org/
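
For reference, that sort of scripted walk-through looks roughly like this with Selenium's Ruby bindings (the URL, field names and "Dashboard" text are placeholders):

  require "selenium-webdriver"

  driver = Selenium::WebDriver.for :firefox
  begin
    driver.navigate.to "http://myapp.example.com/login"
    driver.find_element(:name, "email").send_keys("tester@example.com")
    driver.find_element(:name, "password").send_keys("secret")
    driver.find_element(:css, "button[type=submit]").click
    # Crude smoke check; a real suite would wrap this in proper assertions.
    raise "login appears broken" unless driver.page_source.include?("Dashboard")
  ensure
    driver.quit
  end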


I think https://stillalive.com/ will do what you want.

Their landing page is a little weak but I dig their UI for setting up tests.


I think you could use something like Selenium? Should be fairly easy to test the basic things you would normally do manually.


that's exactly what we offer at http://testingbot.com

you can upload a bunch of Selenium tests, indicate when you want to run them and we'll send alerts if a test fails.


You should check out saucelabs.com



We would like to test a lot more but I really don't know how to test some of the critical stuff.

Just as an example, how do you test a parser that processes large amounts of sometimes sloppy semi structured text? Whether a particular defect should be classified as a bug in my parser or as a rare glitch in the source data is undecidable until I know how often the defect occurs.

What I need is a kind of heuristic test framework that makes sure the parser doesn't miss any large chunks that I only find out about weeks later if at all. I cannot supply individual test cases for everything that could possibly be found in the source data.


I cannot supply individual test cases for everything that could possibly be found in the source data.

Perhaps not, but you can supply test cases for known problems you might encounter, as well as ones you've solved after they've been encountered.
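
For example, each sloppy input you've already tripped over can be pinned down as its own case (Minitest; Parser.extract_records is a hypothetical entry point):

  require "minitest/autorun"

  class ParserRegressionTest < Minitest::Test
    def test_handles_unterminated_quote_seen_in_real_feed
      sloppy = %(name: "Acme Corp\naddress: 1 Main St)
      records = Parser.extract_records(sloppy)
      assert_equal 1, records.size, "parser silently dropped the record"
    end

    def test_does_not_drop_records_after_a_blank_line
      input = "id: 1\n\nid: 2\n"
      assert_equal 2, Parser.extract_records(input).size
    end
  end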


Yes, that's what I'm doing, but I feel it's a drop in the bucket.


Also, don't forget that the tests you add help with regression testing. A large set of tests assures you that a new fix will not reintroduce any of the bugs you had fixed earlier.


It is, but as bugs crop up you can add tests to ensure they don't crop up again. While it's not possible to ensure perfection, it does help ensure you don't 'revert back' to past problems.


I don't test as much as I probably should, because it seems cumbersome since I am mostly dealing with APIs like Facebook. For example, if a user revokes their Facebook OAuth app token they get an email notification about that from me, informing them that the app will no longer be able to function because of the expired token.

I am not automatically testing that, perhaps I am missing something, but automating the steps to log in to Facebook and revoke the token and then also making sure that SendGrid sent the email correctly just seem impractical.


You don't want to test external APIs. You probably do want to test how your application behaves in response to using the APIs. One way is to mock the API calls with canned responses. Another way is to use a tool like VCR (https://github.com/myronmarston/vcr) to record and playback API interactions.
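
A rough sketch of the canned-response option with WebMock and RSpec; GraphClient and the exact Facebook URL/payload are placeholders, and VCR works similarly except that it records the canned response from a real interaction for you:

  require "json"
  require "webmock/rspec"

  RSpec.describe "revoked token handling" do
    it "treats an OAuthException as a revoked token" do
      stub_request(:get, %r{graph\.facebook\.com/me})
        .to_return(
          status:  400,
          body:    { error: { type: "OAuthException" } }.to_json,
          headers: { "Content-Type" => "application/json" }
        )

      # GraphClient is a hypothetical wrapper around the HTTP call.
      expect(GraphClient.new("stale-token").token_revoked?).to be true
    end
  end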


Personally, I do end-to-end tests for some basic cases just to make sure everything works together.

But most of the tests are more fine-grained. So in your example, I'd test the core logic against fake Facebook API responses and a fake outgoing email call. That lets me easily test some of the weirder cases. E.g. if Facebook breaks, will the job skip that user and keep going rather than blowing up?


Manual testing has its value as well. Automated testing isn't always cost-effective or simple.


On my latest project (Rails 3.1) I test the back-end code thoroughly, but the CS front-end code only in a limited way. I'm using Jasmine there, but that is a lot of overhead.


We run tests at Absio (the place I work). Everything is supposed to have full unit test coverage, but with ship-it mode that has slipped a little lately.

When you commit to a personal clone of mainline and push it up to the server, Jenkins picks it up, builds it, and runs the tests, and if there are any failures it notifies you over Jabber and/or email to let you know it is broken and that you should go look at it.

We also integrate Jenkins with JIRA, so as soon as Jenkins builds something, pass or fail, if there is a JIRA ID in the commit message a comment is automatically added there as well, and anyone watching the bug gets notified about it.

This effectively allows people to see how they are coming along in terms of their progress and lets them see when stuff is broken almost instantly. Automated builds are nice because we can distribute the builds across a variety of different environments at the same time; if something worked on Mac OS X but doesn't build on Linux, well, that needs to be fixed.

It has definitely made me code more defensively: nobody wants to have their Jenkins build show up as red on the status board, and nobody wants the extra scrutiny on code review when asking to merge something back into mainline. So far it has worked fairly well, with most developers doing testing.


Testing? Shoot, we sometimes code in prod!

http://www.bnj.com/cowboy-coding-pink-sombrero/

(article's not mine, but might as well be)


I do automated testing as much as I can; the main thing standing in my way is the problem of testing GUIs. GUI testing frameworks are inevitably painfully slow, don't test the appearance of a GUI, and don't test things like responsiveness and the varied behaviors of the user message loop. I'd like to have the ability to test even more.

That said, I think TDD is trendy-consultant-crap. Writing a test before you write the code only works for simplistic code that doesn't need much testing, and it probably won't produce the right test for the code once you have actually written it.

Also, for code I've just written, an ad-hoc manual test using the GUI is often much faster than writing a full test, and I likely wouldn't ever need to run those tests again. The test suite takes quite a while to complete, and if I added every manual test I've ever run, it would take absolutely forever.

Something like "Zen Test", which runs the relevant tests in the background on code being changed sounds good but I don't think there's anything like it for c++. I'm a bit doubtful it could work on complex code in any language. A lot of R-and-R magic sounds like its creators never went code involving one model method, one controller method and one view method.



Thanks for the link.

So: what about GUIs? How does one formally describe pixels displayed on a screen in a way that captures their ability to correctly communicate with the human intended to view them? So that my notice

  Please turn off the foo within bar
doesn't show up truncated to say

  Please turn off the foo
on all reasonably applicable screens? This example is just one of a googolplex of possible failures.


Testing isn't easy, but it's also a skill you get better at over time. You get a feel for what you should and should not test. You get quick at writing unit tests. Toughen up. Learn to test, noobs.

How can you refactor safely without tests? You can't. How can you safely upgrade your tools (which often change in subtle ways), without tests? You can't.

"Every programmer knows they should write tests for their code. Few do. The universal response to "Why not?" is "I'm in too much of a hurry." This quickly becomes a vicious cycle- the more pressure you feel, the fewer tests you write. The fewer tests you write, the less productive you are and the less stable your code becomes. The less productive and accurate you are, the more pressure you feel. Programmers burn out from just such cycles. Breaking out requires an outside influence. We found the outside influence we needed in a simple testing framework that lets us do a little testing that makes a big difference."

Quote from: http://junit.sourceforge.net/doc/testinfected/testing.htm


I love testing; I just find that using it in the right way can be very tricky sometimes, especially in a team setting where some members are weaker than others.

When you work with a team of people that don't understand what to test, you end up with really bad tests that add very little value. Do you delete those tests? Write sane ones?

When you end up with a legacy code base where doing something like functional UI testing is easy but doing unit testing on the actual code is almost impossible, do you even attempt to unit test it?

If you see a piece of code that must be rewritten, but unit testing it costs too much time, do you simply write tests for what you think the assumptions were and then just go about the rewrite?

In the end I see huge value in testing what you write, and automated is preferred. My problem is picking up something else that was clearly done in a misguided fashion and reliably rewriting or refactoring it. I know there are probably some guides/books out there that demonstrate this, so any suggestions are welcome.


For projects I am doing on my own (albeit quite simple, just-for-fun ones) I am taking a TDD approach.

It feels really natural to me, because part of developing an idea is research. That often includes doing some tests of how a particular library works, what format of data is expected, etc. I always found myself writing small isolated programs/scripts to test a specific question I had, so it was natural to start using a TDD approach.

As other commenters noted, due to the migration of much logic to the JS (client) side, testing it together with the server app can be a challenge. For my particular case I "solved" it by using the V8 library. I develop in Perl, and there are great libraries available on CPAN, both V8-based and pure Perl. I am using V8 for performance reasons (doing encryption), but before that I used a pure Perl JS library and it worked perfectly too.

So if your language of choice has libraries to hook into one of the JS engines, I would highly recommend trying to include JS tests in your application's test kit.


I don't always test my code, but when I do I do it in production.

Stay thirsty my friends.


I work in science. We agree that testing would be beneficial, but nobody codes well enough to actually get it done.

To all language designers, there is a HUGE space for a better scientific language. Make it easy for Matlab users to understand, but include better encapsulation and library support. Tie in testing and proving from the core.


You might want to check out the Julia[1] programming language. I have absolutely no idea how good or bad it is, but considering the fact that it's so young, you are still able to influence its development (e.g. suggest better testing capabilities) if you really wanted to.

[1] http://julialang.org/


The webpage is down, so I'll take a look at it later. I'd like to see some simple examples of things as well.

It should be a one-liner to import a CSV file and do a least squares regression on different columns.

It should also be a one-liner to open an image, compute its 2D FFT, and display it.

It should also be one-liners to do numerical quadrature integration, compute the solutions to some ODEs, and maybe even backpropagation training of a neural network with just one layer.


What about NumPy + unittest + all other Python libraries?


NumPy is making pretty strong headway in many communities. It's still not Matlabby enough for the majority, though. It doesn't make a clear improvement, so I think many see it as just poorly replicating the features of Matlab for free.

It's also pretty notoriously difficult to install, especially if you want LAPACK/BLAS. I wasn't able to get it running on many of our servers for that reason and had to revert to Matlab.


It's also pretty notoriously difficult to install, especially if you want LAPACK/BLAS.

I almost said "no way!" then remembered how difficult it was for me to get Scipy 0.9.0 installed and verified via tests.

Still, it's like Matlab ("it" being Num/Sci/Matplotlib) plus all of Python. That's a significant improvement over Matlab if you can get it installed.


Testing is really the last stage. Few software suites get there. Yes, yes, I know you're supposed to build with it in mind from day 0. And if you do that, you may never get to the finish line. You exert every ounce of energy you have on making a viable product. You worry about everything else afterward.


Nope! You should be writing your tests along with your code. Not "keep it in mind"; actually write them. My first step in a new project is `mkdir spec`.


One other beneficial side effect of having an automated test suite is that it comes in handy during any profiling you need to do against the code base. Trigger the automated test suite from the profiler and analyze its output to find performance bottlenecks in the codebase.

Also, the practice of triggering the automated tests regularly (continuous integration) and tracking the time it takes for the tests to run helps detect early in the development cycle whether any of the changes made were suboptimal. All environmental factors being equal, a small new feature shouldn't drastically increase the time it takes to run the test suite.


For LedgerSMB, one of the really critical problems we run into is that of the legacy codebase. We test some critical things, but the legacy codebase has scoping issues that don't impact normal use in a CGI environment but impact test cases. It's one reason we are getting rid of it.

90% of the testing we do is actually on the stored procedures and the general framework. The reasoning here is that these areas have to work right, and therefore we have to get them right all the time. Workflow and the like is more fluid and less easily spec'd out. Test cases aren't as meaningful there, but we do have some.


Where is the "we'd like to do more testing" without the negative excuse option? :)


I write tests today to make sure it works tomorrow. As projects progress, inevitably, no matter what kind of ninja coder you are, a requirement beyond what you could have imagined will pop up. You can either say "no, we can't do that," losing a competitive advantage, or you can code without fear, because when you're done you have a full suite of sanity tests waiting to make sure you didn't mess things up. Unit tests can make an average guy like myself appear to be that ninja coder those job ads are always asking for, the guy with the Oakley glasses.


In production; it's good to involve your customers in the development process so they feel included.

Seriously though, for small web projects I usually aim for 100% unit-test coverage on the models, 70-80% on controllers, and then, depending on the application, Jasmine or Selenium to verify the UI components are happy.

For larger projects, add in more integration tests (models -> controllers, controllers -> views) and something like mechanize to do full-stack tests (models -> views).

Additionally, for either small or large projects, running some sort of lint/static analysis in CI can be beneficial.


I don't have a problem with the idea of tests as such, but I couldn't use them, since I write webapps and what gives us issues isn't the Javascript code about 80% of the time.

It is the CSS, or failing that, the interaction between Javascript and CSS, which I haven't seen any way to test automatically (such a test would have to be able to answer 'given this code, does the resulting DOM look like picture $N').

Usually when there is something wrong with the Javascript it blows up in our faces.

So if anybody knows of a testing framework that can do this, please tell me about it.


Jasmine is a behavior-driven development framework for testing your JavaScript code. http://pivotal.github.com/jasmine/


The best type of test depends on the type of software being developed. For the sort of statistical software that I have been involved with, I think that system level tests (with synthetic and/or real data) give tremendous bang for the buck. This is particularly true if the data is high volume, relatively homogeneous (in some sense), and most of the top-level interfaces are fixed fairly early on. Many other projects are not like this, and so may benefit more (proportionally) from different approaches to testing.
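
As a rough illustration of a system-level test with synthetic data (a Python sketch; fit_mean and the tolerance are made-up stand-ins for a real top-level interface, not anything from the comment):

    import random

    def fit_mean(samples):
        # stand-in for the real top-level statistical interface under test
        return sum(samples) / len(samples)

    def test_recovers_known_mean_from_synthetic_data():
        random.seed(42)
        true_mean = 5.0
        data = [random.gauss(true_mean, 1.0) for _ in range(10000)]
        # with 10,000 samples the estimate should land very close to the true value
        assert abs(fit_mean(data) - true_mean) < 0.05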


The answer to "how much" is always "it depends".

* Tech stacks evolve, changing the amount of testing that is needed => most stacks let you focus only on the "meat" of the logic, rather than things like integration (Spring Integration / Camel), network (Netty), cache (Redis) or even data structures (various language built-ins).

* Humans get better with years of coding => I spot flaws and mistakes during code reviews N times faster than I did 10 years ago. I code in little pieces (usually functions), which "talk" back to me immediately, even before they are finished.

* REPLs are getting really good => Clojure, Scala, Ruby, Groovy, etc. REPLs save lots of time and prevent mistakes: a 5-minute REPLay session reveals a nice and polished approach a lot quicker than a "let's try this / now rerun the test" formula.

* Domain knowledge and "I've done this exact thing before" greatly impact the amount of testing needed => e.g. deeper domain knowledge allows for [better] tests, while no domain knowledge requires lots of prototyping (even if you think it is the "real thing" at first, it is not, it's a prototype) and would greatly suffer from a large number of tests, as most of the time will be spent rewriting tests instead of learning the domain.

In the end, the rule of thumb I always use is "do whatever makes sense". I don't buy TDD, ADD and other DDs. They are fun to read about, but they are too removed from the "real thing". If any DD term is needed, what I use is MSDD => "Making Sense Driven Development"


Another interesting question: how often do your tests run? Most folks probably run unit tests with continuous integration but what about functional and performance driven tests?


Continuous integration should run all your functional and performance tests if possible. Each "unit" (could be a commit or a push or a merge, depending on your philosophy) can cause errors, and being able to pinpoint the unit in which the failure happened is immensely valuable.

If you have something really long running (e.g. you make a database and have a two-week test), then you may be able to minimize your test (possibly automatically) and use git/hg bisecting to find it.

Fuzz testing (finding holes in your code) can be run separately, and again, you can find the root cause through minimization and bisecting.


Most tests (unit, integration, etc.) are triggered when new code is checked in. For other kinds of tests, we use schedule triggers to run them at a particular cadence, either overnight or more frequently if that's what's needed.

TeamCity is good for automating with both kinds of "triggers".


Unit tests. Tick.

Integration tests. Tick.

Automated acceptance tests. No Tick.

Tried to sell Concordion as a framework to support BDD, but that is a hard cross-discipline change which would have required more effort to push through. So as a short-term measure we have started to write/express unit tests using a standard BDD style: GIVEN x WHEN y SHOULD z. This has helped to assign value to each unit test. There is now a direct connection between the test name and the acceptance criteria specified in a user story.
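
A tiny sketch of what that naming convention can look like in practice (Python here; apply_discount and the 10% rule are invented purely for illustration, only the GIVEN/WHEN/SHOULD test name mirrors the style described above):

    def apply_discount(order):
        # hypothetical business rule used only to give the test something to check
        if order["customer_tier"] == "gold" and order["total"] > 100:
            return round(order["total"] * 0.9, 2)
        return order["total"]

    def test_given_a_gold_customer_when_order_exceeds_100_should_apply_ten_percent_discount():
        order = {"total": 150.00, "customer_tier": "gold"}
        assert apply_discount(order) == 135.00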


You need an option for...

We have a test framework and a devoted team of people dedicated to encouraging the use of said framework but the rest of our engineering staff don't get it.


I just want to say, it's always a nice feeling when I get all-green output from rspec and Jenkins. The problem is that tests, like your code, are subject to the laws of entropy that come from bit-rot.

So, I test things that matter and don't change too often - core business logic.

100% test coverage is just a goal, a bar to aim for.

And I'm totally with Zed Shaw when it comes to TDD - not worth it when you're still trying to get a full understanding of your problem domain.


We need to test more. I've run projects before that had over 1,500 automated tests, mostly written by myself; it was beautiful and so simple to make invasive changes.

We have a lot of catching up to do right now, but I think that's what "startups" often do. We will catch up with the tests in the next month or so; at the end of the day I know perfectly well that without them, pivoting and making invasive changes will simply be next to impossible.


And how often did you have to rewrite those tests because you were drastically changing the architecture of your code?

Do you think the effort of maintaining all those tests might not have paid off? This question is very tricky to answer.


Not only does testing help with managing large codebases by letting you make actual assertions about certain parts of the code (to be able to prove correctness), but it also improves the quality of your code. If you're writing code that must pass certain tests, you inherently start to think about making that code more modular and decoupled, i.e. injecting dependencies rather than creating them, for a start.


I don't find an applicable selection for my company.

We write and run so many tests that it is a full-time job curating the test suites that should be run prior to code delivery. Basically, if you don't like writing tests you will be miserable at our shop.

The tricky part is keeping testing standards consistent when you get beyond 30 or 40 developers.

Developers tend to be more opinionated about testing practices than even editor selection and curly brace placement.


I really wish automated testing was significantly better for Java and the ilk. To steal from the Haskell world, I want to augment JUnit/TestNG with SmallCheck and QuickCheck.

The tests would go something like this: 1) SmallCheck exhaustively tests the small cases; 2) JUnit/TestNG tests the main use-cases; 3) QuickCheck produces a lot of random tests and hammers the APIs.

Sadly (for Java at least) this appears to be a rather difficult ask.
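
For comparison only, the QuickCheck-style step 3 looks roughly like this in Python with the Hypothesis library (a swapped-in analog, not the Java setup the comment is asking for; the round-trip property and the encode/decode functions are invented for the sketch):

    from hypothesis import given, strategies as st

    def encode(s):
        # hypothetical API under test
        return s.encode("utf-8")

    def decode(b):
        return b.decode("utf-8")

    @given(st.text())
    def test_encode_decode_round_trip(s):
        # the framework generates many random strings and hammers the property
        assert decode(encode(s)) == s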


Bottom line is: test code is code and you have to maintain it. If you write code that does not give you anything in return, or gives you more headaches than anything else, you wasted your time when you wrote it.

When you are writing any code, you should try to predict if what you will gain out of it will be worth it. In other words: evaluate the risks of anything you do in your life.


You shouldn't be proud of having 1 gazillion tests if all you do is rewrite/fix regressions on them.


One thing that automated tests do well is repeating bugs that your user finds.

Sometimes it can be tricky (replicating the conditions of their data set comes to mind), but it's quite good for preventing regressions.
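
A rough sketch of pinning a user-reported bug in place (Python; parse_price, the thousands-separator bug, and the ticket number are all invented for illustration):

    # Bug #482 (hypothetical): prices containing a thousands separator used to
    # crash the parser. The fix plus this test keep the regression from returning.
    def parse_price(text):
        return float(text.replace(",", ""))

    def test_regression_482_price_with_thousands_separator():
        assert parse_price("1,299.00") == 1299.00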

That said, they can give you a false sense of security. If your test is wrong, it can allow bugs to slip through the net until your user picks them up at the worst possible time.


I write web apps and I don't do any testing at all. I am also a unit testing newbie. I just run the app and make sure the change I made works. No automated testing whatsoever. It just works, and I believe it would be unnecessary overhead. Is this bad? If yes, how can I unit test my JavaScript? Plus, I always thought unit testing was for code that compiles, right?


I went through an experience where 2 years ago I thought "I hate unit testing, don't know how to do it and don't see the value". 2 years later I think "I enjoy unit testing, know how to do it well, and see the value in unit testing _most_ of the time".

I believe this transformation is entirely to do with the fact that I paired with a brilliant developer every day for 6 months who really helped to answer all my questions and show me how to test a variety of different things. I truly believe that unit testing (and testing in general) is a hard thing to grasp without being able to learn from someone over a long(ish) period of time.

I realized that I hated testing because I didn't know how to do it and wasn't good at it. I also didn't understand what the essence of a unit-test was; my tests would often cross multiple integration boundaries (ie: hit the db and the server) and were really more like bloated integration tests. Once I had sorted that out and was able to see a variety of techniques for testing specific scenarios I realized that I actually enjoyed testing and the satisfaction of knowing my code was covered against defects started to be a big motivator.

To answer your specific question about JavaScript testing, I've been using JasmineBDD[1] for the last 2 years and have found it a joy to use. It really makes testing things easy and has tools that allow you to isolate your tests down to the individual units.

[1] http://pivotal.github.com/jasmine/


I paired with a brilliant developer every day for 6 months who really helped to answer all my questions

Sounds great. What else did you learn?


Lots! How to test-drive (as opposed to test-after), how to mock out integration points, what mocks/testdoubles/spies are and how they differ. I also learned that Enterprise Java is a particular level of hell. In all, it was a good experience though :)


Look into Jasmine BDD for js testing. We use it for all our js and it has greatly improved quality and reliability.


Here's what I clicked: "We have a test suite that tests a few critical things" and "We are happy with the amount of testing we do".

Here's what I would have clicked, if present: "We have a test suite that tests a lot of things, but probably only represents 75% coverage at best. We'd like to do more testing, and we're continually adding more, but the biggest barrier is cultural."


I think TDD at times is overkill, but the core components of any app that others stand on MUST BE TESTED. The deeper your component is, the more critical tests are, because if code a few levels deep breaks, it is much harder to detect and fix than something on the surface, which is usually immediately visible, immediately obvious, and low-risk to fix.


Isn't actually running your program and checking if it works a form of testing?

The term "test suite" seems to refer to formal testing techniques like creating unit tests and the like. I don't do that. But I do test my program on every functionality by running and checking if it does what it's supposed to do. Does that qualify as testing?


Testing is a pain and it takes time. Plus, we don't always even find the bugs. But if you write unit tests or want to start unit testing, Typemock's newest release (released last Monday) makes unit testing easy and finds the bugs for you. http://www.typemock.com


I do simple output and performance testing. I md5 the output produced by our programs and test their run time, memory use, etc. so that when we change the code, we can verify the output is the same and performance is still OK. I try to do some unit testing too, but do not have time to do that as much as I'd like.
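
A rough sketch of that output-hashing approach (Python; the program name, input file, golden hash, and time budget are all made-up placeholders):

    import hashlib
    import subprocess
    import time

    EXPECTED_MD5 = "d41d8cd98f00b204e9800998ecf8427e"   # hypothetical golden hash
    TIME_BUDGET_SECONDS = 30                             # hypothetical perf budget

    start = time.time()
    result = subprocess.run(["./our_program", "--input", "fixture.dat"],
                            capture_output=True, check=True)
    elapsed = time.time() - start

    # Verify the output is byte-for-byte identical and the run time is still OK.
    assert hashlib.md5(result.stdout).hexdigest() == EXPECTED_MD5, "output changed"
    assert elapsed < TIME_BUDGET_SECONDS, "performance regression: %.1fs" % elapsed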


I used "and also..." as my second answer, because none of those others applied. My real second answer is "We'd liked to do more testing and we're working on it as fast as we can consistent with producing the new features and products demanded". There's a decade of code that has very little testing, still...


I highly recommend this book "xUnit Test Patterns: Refactoring Test Code" (http://www.amazon.com/xUnit-Test-Patterns-Refactoring-Code/d...) to anyone who wants to start to use tests on daily basis.


Currently working on a webapp with a legacy (2002-ish) Java code base with a fair amount of testing, but it's not even close to full coverage. JMockit has gone a long way towards making it easier to expand the test coverage, but it's difficult to find time to make a significant impact.


We try to do as much TDD as possible. Specification by test is the best way to drive out corner cases.


I have some projects with extremely thorough test suites, and some projects with no automated testing at all.

I find my desire to work on those projects directly proportional to test suite coverage. Once you start writing against automated tests, there's no going back...


I would have liked an intermediate option between "test all" and "test a few critical things". Pretty much we follow the 80/20 rule with unit and integration tests, and it's served me and different teammates well over years of software development.


This is the same philosophy I subscribe to. It's wasteful to test everything in a system where the world isn't going to end if something breaks. Especially if you're running lean and the code may be thrown away in a week.


We have a test suite covering most of the code. We'd like to do more testing and are doing it.


It's scary that so many programmers don't write automated tests when their entire profession is about abstraction and automation. If you want it to work, test it. If you want to maintain sanity, automate it. It's not particularly complicated.


I don't usually test my code, but when I do... I do it in production. https://www.google.com/search?q=I+dont+usually+test+my+code

(Sorry for the obligatory meme reference)


I think your definition of testing is flawed. The majority of "testing" is just using your site to see if it is broken. You can write automation to pinpoint errors faster, but that is not the way the majority of the world tests software.


Yes, we have integration + unit tests for everything that we do. Before we release any code the entire suite gets executed, which helps catch bugs quickly.

We even use a subset of those tests in production to make sure all sub-systems are working, not just a ping to the api.getsocialize.com domain, which isn't sufficient.

When I first started testing I was rather skeptical. But now that I do it, I wouldn't code without it. There are so many other benefits of testing, like cleaner code, incremental builds for native apps, and just general confidence in your deployments that allows us to deploy any time of the day without worry.


I always test my code. It saves time and money. http://hustletips.tumblr.com/post/19348536703/vet-your-work-...


I Don't Always Test My Code, but when I Do, I Do It in Production.

http://troll.me/i-dont-always-test-my-code-but-when-i-do-i-d...


Jokes aside, I personally try to as much as possible.


After reading this thread, I've realized I have to make sure my next employer actually believes in testing.

I don't know how anyone can move forward on a long-term application without tests catching regressions for them.


We have a test suite that automatically runs when new commits are pushed to GitHub.


I answered "We'd like to do more testing but it's too much overhead" but it's not really true. The true answer would be "We'd like to do more testing but it's hard to convince people to really write tests."


My company has a pretty complex system for testing. But almost all code that makes it into production is done under ridiculous deadlines that force us to skip all of our testing. Ain't that life?


Yes, yes, yes, and yes again. We're not obsessed with code coverage, but we get disappointed when it falls below about 75%.

We've got a codebase which is a mixture of Javascript MVC (Backbone) and PHP (Zend).

A healthy attitude to unit testing and dev-ops has saved our backs more times than I care to mention.

Also - it's a very useful way to "train" new developers. Spending 2-3 weeks writing tests is a great way to get a feel for a) the code and b) the house style, and new developers can be immediately productive without risking touching production code on day 1.


We have a test suite that checks for as much stuff as we can but our application is really rather complicated, and to test it fully we would need a full time test engineer.


It's never possible to remember to test for everything. And according to Murphy's Law, what will break is the one thing you forgot to test for. So then, why test at all?


That's not Murphy's Law, it's lack of coverage.


A good dashboard is better than testing.

Whenever I deploy new code, I make sure I pass all the unit tests, but then I monitor the dashboard and incoming requests to confirm.


Testing is ideal for established business. Testing is inefficient for some early stage startups which very often change the way their product works (features/ui).


I would need some sort of spec or at least a vague understanding of intended behavior before testing. A good day is a day my lead dev doesn't bork the repository.


Yes I test drive my code to describe behavior and relationships between collaborators. I use my tests to validate my code and design, not to "catch bugs".


YES, this is the point!


"I don't always test my code, but when I do, I do it in production." http://i.qkme.me/22sv.jpg


I chose "tests all functionality", but that's completely unrealistic. We TRY to test everything beyond just critical items, but we'll never get there.


This year, my motto is: "Tests or it doesn't exist" - don't use any code that isn't tested or the author is unwilling to accept contributed tests.


We test a lot, but would really like to hire a permanent QA tester because it's very easy to miss things when you see the same software every day.


Testing is great, but I find that a set of full-system tests tend to give the most bang for the buck if you can make them run quickly enough.



Sometimes I think: Testing was not invented by a geek. Geeks don't test. Other times I think: Testing is good. It gives quality.


Being in a software testing class right now, I'm absolutely shocked how many "minimal/no testing" responses and comments there are.


Missing option: I wish we did more testing but the organizational support does not exist. Also it's time to look for a new job.


Why is there no "and also click" option for, "we are unhappy with the amount of testing we do, and are gradually adding more".


Really, downvoted? The survey seems predisposed to result in the conclusion that testing is a waste of time.

My company finds it worthwhile enough that we are never happy with what we have. We could always write better tests, but we need more engineers than we can find to hire. So we are unhappy with the current situation, but are working to improve.


This is against the HN guidelines[1]: "Resist complaining about being downmodded. It never does any good, and it makes boring reading."

[1] http://ycombinator.com/newsguidelines.html


I think there should be an option between the top two: "We have a test suite that tests most things."

That's where our dev team is really at.


How about an option for: "Mixed bag: some components are exhaustively tested, others not at all, many in between."


I test everything before I send it out. That's my rep on the line and I don't want to be known for faulty coding.


Yes, it's called "user tests". lol

Just kidding, we have a test suite that tests the things deemed critical and some other stuff.


Everything through unit tests

All important features through functional tests

All critical path features through endurance tests

Ad Hoc + user feedback for the rest


You forgot to put the option: "I know tests are great, I know I'll regret it, but I'm too lazy to write them".


If you haven't got tests, you can't automate your deployment.

If you can't automate your deployment, you can't rapidly iterate.


My company doesn't produce a website, but we run a large suite of tests nightly, and before every release.


There's a huge gap between "all" and "a few critical things". I expect many others fit in there.


"I don't always test my code, but when I do, I test it live."

I wish I wrote tests more. Maybe I'm too impatient. :(


You probably just don't know how to test properly yet. Testing should ultimately give you more time and make your life easier. If it's making it harder or making you take longer (unless it's a really trivial task), it's not being done correctly (yet).


Red - green - refactor! It really saves time and heightens cohesion. For me, at least.


Sometimes we test, sometimes we let users test!! Either way...we find our bugs...


In production ;-D

Take the fortune 500 approach. It's not a bug unless customers complain.


We are rearchitecting our code base to make testing easier/faster/better


I usually test everything with Selenium - what are other good methods?


http://phantomjs.org and http://casperjs.org/ are amazing (and open source).


Thanks!!!


I don't always test my code, but when I do, I do it in production.



Everyone says they do Unit Tests, but no one "really" does it.


We don't test code where I work, but I'm strongly for testing.


We use a combination of user testing and developer testing...


If by test you mean "try out". Nothing formal.


We test our code in battle...er, production...


Medical product. Heck yeah we test everything.


Testing is cool when you do it, so we decided not to do it... it pushes you to code better, even if it's a pain in the ass :)


Outsourced that to the end user. /joke


OMG! Only 773 programmers O_o


We would, if management didn't shove tight schedules up our dev bottoms.


What will tester do then?


Not like we should


Indeed. Travis-ci.


we just started giving importance to it... :)


about 80% of the line coverage.


Where's the "We have a QA department" option?

lulz


Every single line of code...


MVP BABY, AIN'T GOT NO TIME FOR TESTIN


Real programmers don't test their code, QA does that for me, it comes back as defects later


I don't always test my code, but when I do I do it in production. Stay thirsty my friends.



