Well I know it's a contrived example, but I don't understand the motivation behind mocking an external library's code. That library should have its own tests.
Say I have three layers of custom code: A calls B, which calls C.
If I want to test B, then I want to mock C, and have my test call B the way A does. I want to mock C because C is also custom code, and if my test fails, I want to know it's because of a bad B implementation, not because a buggy C is confusing matters.
But if B also calls D from an external open source or vendor-supplied library, I don't usually want to mock D. That just adds needless complexity to the test, and reduces my focus on my own custom code.
An exception would be if this library code makes its own network call or something - then you might want to mock it to save time.
Anyway, mocked unit tests become far simpler if you maintain the right focus. Have the test call B as A would (passing in canned fixtures if necessary), mocking C only to maintain focus on B's implementation. If you start getting involved in trying to mock external library code, or even internal private methods that B calls, you'll have a bad time.
The advantage to maintaining that kind of focus is that refactoring becomes easier. Want to change the name of C? Your IDE should handle refactoring your test, too. Want to change the implementation of B? You don't even need to change your test at all, just make sure the right values are still there in the return value. Maybe you'd need to add a couple of assertions, but that's it. If you're looking at having to do a serious refactoring of your unit test in those cases, then it probably just means you're still designing your code architecture and things are still really fluid. And in that case, it would make sense that you might have to throw away your test, because by definition it means you are still deciding on what your specifications are.
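To make the A/B/C/D layering concrete, here's a minimal sketch with hypothetical names: calculateTotal stands in for B, a TaxService interface for C, and Math.round plays the external library D, which we leave unmocked.

```typescript
// Hypothetical layering: B (calculateTotal) calls C (TaxService, our own code)
// and D (Math.round, an "external" call we don't bother mocking).
interface TaxService {
  rateFor(region: string): number;
}

// B: the unit under test.
function calculateTotal(net: number, region: string, tax: TaxService): number {
  // D is called directly; only C is behind a seam we control.
  return Math.round(net * (1 + tax.rateFor(region)) * 100) / 100;
}

// The test stands in for A: it calls B directly, mocking only C.
const fakeTax: TaxService = { rateFor: () => 0.2 }; // canned fixture
const total = calculateTotal(100, "anywhere", fakeTax);
```

If this test fails, the fault is in calculateTotal itself, because the only custom collaborator has been replaced with a known-good stub.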
The guys who came up with the mock object approach to TDD would say that you shouldn't be mocking external libraries directly. You want your own time abstraction which is probably far simpler than what you get from a library that has to satisfy everyone's needs.
I think that building that level of isolation between you and your framework or library is just basic good practice. The fact that you need time doesn't change, but the way that you get it might.
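A sketch of what such a time abstraction might look like (names are hypothetical): the app asks a Clock rather than calling the platform's time function directly, so a test can freeze time instead of mocking a library.

```typescript
// A hand-rolled time abstraction: far simpler than any general-purpose
// date/time library, because it only has to satisfy this application.
interface Clock {
  now(): number; // epoch milliseconds
}

// Production implementation delegates to the platform.
const systemClock: Clock = { now: () => Date.now() };

// App code depends on the abstraction, never on Date directly.
function isExpired(deadlineMillis: number, clock: Clock): boolean {
  return clock.now() > deadlineMillis;
}

// A test substitutes a fixed clock -- no mocking framework needed.
const frozen: Clock = { now: () => 1_000_000 };
const expired = isExpired(999_999, frozen);
const notYet = isExpired(1_000_001, frozen);
```

Swapping in systemClock in production and frozen in tests is the whole trick; if the way you get time changes, only the Clock implementation moves.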
I'd rather have a program that does what it does in X lines of code than a unit-tested, mocked codebase in 5X lines of code. Sure you have tests, for whatever they're worth (I'm somewhat skeptical of TDD in the first place), but you have so much MORE code.
It's just basic separation of concerns. I've seen too many development organizations brought to their knees by the fact that they don't have any layering between their logic and the libraries/frameworks they use. It's a very real problem.
That's interesting. I don't have much experience with it, but when I've seen similar stuff it looked like an anti-pattern to me. Why should developers need to learn your specific wrapper on top of a popular 3rd party component? The internal thing is most likely not documented as well, and common problems don't have answers on Stack Overflow. It requires extra work to use additional features of the library.
I'm similarly wary of convenience libraries that provide marginally simpler APIs on top of standard libraries.
I'm not convinced it's a good idea. I wish I had your experience. Any good reading material?
Wrapping a library in your own concept allows you to define what's right for your application. It has the effect of pushing the third party library out to the edges of your system, replaced with whatever you wrapped it with. This makes replacing it, for testing or any other reason, much easier than if it's proliferated throughout your code unwrapped.
Wrappers should be simple, so creating and documenting them shouldn't be a huge concern beyond a few integration tests to understand how the library works.
I've been doing a lot of JavaScript stuff lately, and libraries like Dojo have 5 different ways to locate an element in the DOM. I have no idea if all these ways will be around in a future release, or, right now, which one is better. Unifying the DOM selection code behind our own interface keeps things uniform throughout the app, instead of each of us using a different function, and lets us try out the different library functions, or different libraries, easily.
But where design comes into place is to determine where and when you need to separate these concerns. There are always times to do this. There are also times not to.
For example (Perl example here), in LedgerSMB we layer some things. We layer the templating engine. We layer (now, for 1.4) arbitrary precision floats. We layer datetime objects. Many of these are layered in such a way that they are transparent to most of the code.
But there are a lot of things that aren't layered, because there isn't a clear case for doing so right now.
(As a footnote, PGObject requires that applications layer the underlying framework because there are certain decisions we don't feel comfortable making for the developer, such as database connection management.)
I agree. Wrapping your library for testing is really pushing the boundaries of sensibility.
Low density code that doesn't provide application logic is one of my pet peeves.
There is a philosophy that started in the 90s (and Microsoft was a proponent of it [1]) that adding more layers to an application would make it more malleable, but in a typical CRUD web app, layers only bloat the code and make it slower.
I'd suspect adding a wrapper to a date class just for the sake of testing is more likely to add bugs than remove them.
Not exactly the same case, but it sure is nice to be independent of calling System.getCurrentTimeMillis() in the code under test. One example of how to do it (in a simple case): "TDD, Unit Tests and the Passage of Time" http://henrikwarne.com/2013/12/08/tdd-unit-tests-and-the-pas...
That sounds more like an argument for putting an interface in front of an implementation on the app side, and then mocking that interface in the test. Which is totally fine, because then you are isolating the custom implementation (a thin shell around the external library), as opposed to mocking the external implementation in the test.
That's more along the lines of what a lot of TDD literature says.
Write adapters or facades that wrap external libraries, use those in your own code's unit tests. This makes you less bound to a specific library as well. Don't mock the outside world [0] for testing your adapters/facades/whatever, but do integration tests that cover your adapters using the real outside world.
You'd also do complete end to end tests where the entire system is used as if it were in production (acceptance testing). TDD makes a lot more sense if you think of it in those three layers: acceptance, integration, unit.
[0] Outside world means anything that's not your code -- networking, filesystem, external libraries, etc.
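A sketch of that split, with hypothetical names: app code depends on a Store interface (and gets a fake in unit tests), while the FileStore adapter itself is covered by an integration test that touches the real filesystem rather than mocks of it.

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// The adapter boundary: everything above this interface is "your code"
// and can be unit tested with a fake Store.
interface Store {
  save(name: string, data: string): void;
  load(name: string): string;
}

// The adapter itself wraps the outside world (the filesystem).
class FileStore implements Store {
  constructor(private dir: string) {}
  save(name: string, data: string): void {
    fs.writeFileSync(path.join(this.dir, name), data, "utf8");
  }
  load(name: string): string {
    return fs.readFileSync(path.join(this.dir, name), "utf8");
  }
}

// Integration test: real disk, real I/O, in a temporary directory.
const dir = fs.mkdtempSync(path.join(os.tmpdir(), "store-"));
const store = new FileStore(dir);
store.save("greeting.txt", "hello");
const roundTrip = store.load("greeting.txt");
```

Unit tests never see fs at all; the adapter's correctness against the real filesystem is established once, here, instead of being assumed by a mock.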