The unit test facility built into the D programming language has turned out in practice to be a huge win. Sure, it's on the basic side, but the advantage is that there's a standard way to do it that is little more than writing your tests right next to the function being tested. This convenience tends to make a function look incomplete without a unit test.
No, they should support contracts as Eiffel does or have better type systems as Haskell has. Most problems can be eliminated by making them impossible in the first place ;)
The problem is that this frequently turns into a case of throwing the baby out with the bathwater. I like languages where I spend less time fighting the compiler and more time coding. If tests are the price I have to pay, then so be it.
And ironically enough, it ends up being easier to write test harnesses with dynamic languages because it is easier to mock things up.
With languages like Java, the inherent rigidity requires all sorts of ugly reflection hacks, plus factory/impl/interface proliferation, to get the flexible testing behavior you get for free with more dynamic languages.
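A sketch of why mocking is cheap in a dynamic language: Python's `unittest.mock` can stand in for any collaborator without declaring an interface or writing a factory (the `checkout`/`charge` names here are made up for illustration):

```python
from unittest.mock import Mock

# Any object with the right method will do; no interface declaration
# or factory is needed to swap in a test double.
def checkout(gateway, amount):
    if gateway.charge(amount):
        return "paid"
    return "declined"

gateway = Mock()
gateway.charge.return_value = True

assert checkout(gateway, 100) == "paid"
gateway.charge.assert_called_once_with(100)  # the mock also records calls
```

The Java equivalent typically needs a `PaymentGateway` interface, a production `PaymentGatewayImpl`, and either a mocking framework built on reflection or a hand-written fake wired in through a factory.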
If you need to add new language primitives to make unit testing practical, maybe your language is too cumbersome.
In Eiffel you do not fight the compiler. The idea is that you define a set of assertions that must be true before writing the code. This process strengthens the program as you get a better-defined picture of who does what in the program. This "Design by Contract" approach is much like TDD/BDD.
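A minimal sketch of the idea in Python, using plain assertions; Eiffel checks its `require`/`ensure` clauses automatically, whereas here they are written by hand (the `withdraw` example is hypothetical):

```python
# Design by Contract, hand-rolled: the assertions state what must hold
# before and after the body, documenting who is responsible for what.
def withdraw(balance, amount):
    # preconditions ("require" in Eiffel terms) -- the caller's obligation
    assert amount > 0, "amount must be positive"
    assert amount <= balance, "insufficient funds"

    new_balance = balance - amount

    # postcondition ("ensure") -- this function's obligation
    assert new_balance >= 0
    return new_balance
```

The contract makes the division of responsibility explicit: callers must not pass a bad amount, and in exchange the function guarantees a non-negative result.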
In Haskell, you do not fight the compiler either. With practice, you will find that the few type errors the compiler returns to you are legitimate: Wrong calling conventions, typos, API misuse, returning nonsense in a specific case and so on. I have, perhaps, some 5 errors per 100 lines of Haskell code I write that the compiler quacks on. It is equally bad moving dynamically typed programming idioms to Haskell as it is moving statically typed programming idioms into, say, Python.
"Fighting the compiler" doesn't necessarily mean type errors. It could mean that you have to warp your design to fit the constraints of a language, often making much more extensive changes to your program than you would otherwise would've had to.
Though a lot of my complaints about Haskell's type system could be alleviated with a good integrated refactoring tool. If I had one-command operations to:
a.) Convert a plain function to a monadic one
b.) Change the signature and stacking order of a large stack of monadic transformers
c.) Add a new alternative to an algebraic data type, and track down and fix all functions that use it.
d.) Add a new parameter, at an arbitrary position, to a function.
e.) Change a straight return value to a Maybe.
f.) Add a new field to a record or tuple type.
Then I'd find Haskell's type system much less oppressive. What's the status of Leksah these days?
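Point (c) is where a type checker earns its keep: after you add a variant, every incomplete pattern match gets flagged, and that is exactly the tracking a refactoring tool would automate. A rough Python analogue using a tagged union (the `Shape`/`area` names are illustrative):

```python
from dataclasses import dataclass
from typing import Union

# An algebraic data type sketched as a tagged union. Adding a new
# alternative (say, Triangle) means hunting down every consumer like
# area() below; in Haskell the compiler points at each incomplete
# pattern match for you.
@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

Shape = Union[Circle, Square]

def area(shape: Shape) -> float:
    if isinstance(shape, Circle):
        return 3.141592653589793 * shape.radius ** 2
    if isinstance(shape, Square):
        return shape.side ** 2
    # A static checker can prove this line unreachable; if a new variant
    # is added, it becomes reachable and the omission is caught.
    raise TypeError(f"unhandled shape: {shape!r}")
```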
Absolutely. Thinking is overrated; in the absence of new data and additional experience, you end up rehashing the same decisions you thought about 20 minutes ago.
Yes, having to actively think about all of the implications all of the time often leads to analysis paralysis. Automated tests often can alleviate this burden.
It's often better to quickly go down a path and see how it plays out, as opposed to sitting there and thinking about something for a long time. When you move forward, problems and things you didn't think about become obvious.
I agree that tests should be part of the language. For example it would be great to have convenient and natural language support for creating simulated error conditions to force test execution of rarely used error handling code. An essential aspect of production code is to design in the handling of errors and unexpected conditions. This is true across the abstraction hierarchy; who throws exceptions and who catches them? In C code, how do you encode error returns? Who has the responsibility of printing error messages or notifying the user of a detected error? It would be great if these aspects of a system could be exercised as part of automated software testing, and I believe this would be facilitated by language features that could interact with test case datasets to simulate exceptions.
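Even without language support, the "simulated error condition" part can be sketched today with a test double whose side effect is to raise, forcing the rarely-taken handler to actually run (the `load_settings`/`reader` names are hypothetical):

```python
from unittest.mock import Mock

# Fault injection by hand: the reader dependency is passed in, so a test
# can substitute one that fails on demand and exercise the error path.
def load_settings(reader, path):
    try:
        return reader(path)
    except OSError:
        return "defaults"  # the error-handling branch we want exercised

failing_reader = Mock(side_effect=OSError("simulated disk failure"))

assert load_settings(failing_reader, "app.conf") == "defaults"
failing_reader.assert_called_once_with("app.conf")
```

Language-level support would generalize this: rather than threading fakes through by hand, a test-case dataset could declare "call N to `open` raises ENOSPC" and the runtime would inject it.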
I've always thought the unit test[s] for any given method in a system should be accessible at runtime as a singleton-method of the method object/closure. It would be a similar idea, in effect, to Lisp docstrings, except with another closure in place of a string.
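Since Python functions are objects with arbitrary attributes, the idea can be sketched directly: hang the test closure off the function the way a docstring hangs off it (the `slugify` example is hypothetical):

```python
# The unit test travels with the function object itself, retrievable at
# runtime just like __doc__.
def slugify(title):
    return title.lower().replace(" ", "-")

def _test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("ALREADY-SLUG") == "already-slug"

slugify.test = _test_slugify

# Anything holding a reference to slugify can now run its test:
slugify.test()
```

A test runner could then walk a module's functions and invoke each one's `.test` attribute, with no separate test-file convention needed.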
This is more a comment on his motivation than the body of the article, but it seems to me that if running your tests requires anything more than typing "make/rake/ant/whatever test" then you're Doing It Wrong. And once your tests are that easy to run, they're also easy to integrate.