Just copy-paste the REPL interaction into a docstring, and you're done. Apparently, it recognizes ">>>" as the REPL prompt and the following line as the expected output. An equivalent for Clojure would be a neat addition.
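For what it's worth, here's a rough idea of what a minimal Clojure equivalent could look like. This is only a sketch under some assumptions: the examples live in the docstring using the default "user=>" prompt, the next line holds the printed expected value, and check-doc-examples is a made-up name, not a real library.

    (require '[clojure.string :as str])

    ;; Doctest-style sketch: scan a var's docstring for lines containing the
    ;; "user=>" prompt, eval the form after the prompt, and compare the result
    ;; against the value printed on the following line.
    (defn check-doc-examples
      "Verifies REPL-style examples embedded in a var's docstring.

       user=> (+ 1 2)
       3"
      [v]
      (doseq [[prompt expected] (partition 2 1 (str/split-lines (:doc (meta v))))
              :when (str/includes? prompt "user=>")]
        (let [form   (read-string (second (str/split prompt #"user=>" 2)))
              actual (eval form)
              wanted (read-string (str/trim expected))]
          (assert (= wanted actual)
                  (str "example failed: " (pr-str form)
                       " expected " wanted " but got " actual)))))

    ;; (check-doc-examples #'check-doc-examples) ; checks its own docstring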
It's a neat hack -- I think there's definitely a gap though, in that there could be a tool that does some smarter introspection of your history and the state of the REPL to generate a unit test. v1 would be quite rudimentary, but after many iterations this could be an almost magical tool.
The "code-as-data" homomorphic semantics of lisp would make building something like this quite interesting, as it would probably need to transform some of the code you've REPLled from statements into assertions, etc.
I think you would end up restricting what you type in the REPL because you know it'll become a test.
Then you wouldn't be using the REPL for what a REPL can offer; you'd just be writing test code.
Unless it was really magical and covered 100% of anything someone could type in the REPL. If you're only using a subset of the REPL's/language's features because you know your REPL-to-test converter doesn't like some stuff, then you're writing the tests anyway.
I'm confused. This would be a REPL enhancement, meaning it would be something you use as needed.
Currently, using the Clojure REPL to test things comes with a twinge of guilt, as it is not being captured for regression tests. (And I am too lazy to write unit tests separately.)
This would make using the REPL a cycle between two "styles": ad-hoc experimentation, and then, when you've found some repeatable behavior you want codified in a test, capture mode. These can in some cases be distinct processes (physically and mentally) and in other cases overlap so much as to look and feel like the same thing.
Yes, yes I am. :) The reason being that, first, it's annoying to have to set up the boilerplate for the file. Second, it's annoying to have to convert what I just exercised in the REPL into a test (setup, teardown, asserts).
The reason TDD works and is fun is that you are using tests to learn and explore. It just so happens the artifact of that learning ends up living forever as a test. In a REPL, I'm doing that same learning and exploration already. The act of writing a test becomes as exciting as filing a TPS report.
Here's an example. I've just implemented a new function and REPL'ed it until it felt solid. There were about 4-5 ad-hoc calls I made to the function to prove that it worked. I finally got to the point where I can call it with my 4-5 different arguments and it always outputs the right thing. Using readline, my arrow keys, and my enter key, I'm repeating the same series of steps over and over until the function works. We all do this. Win.
Now, I'm at a crossroads. Do I just start working on the next piece of the project? I know this piece works, I'm happy with it.
But wait! What if something changes? I need to write a test, don't I? Sadness consumes me, since testing is slowing me down. I've already exercised the code, I already know it works, and I've already written the tests, albeit sloppily, in the REPL. Why do I have to switch gears now and start writing a file, running a test runner, and so on?
The truth is: I won't. I'll move on to the next thing, keeping my flow, skipping something boring whose outcome I already know in favor of something fun: the next feature.
Maybe those who do switch off and go through the motions to write a test, repeating themselves, are more noble and careful in their programming. But I humbly suspect most of us are more lazy than noble :)
Just want to say that I've been thinking about REPL vis-à-vis unit testing for a few years now and my experience and conclusions match yours very closely. You've done a nice job of articulating them (here and in the root comment).
I don't know, I'm still not convinced. I guess it'll have to be one of those things where I might change my opinion after trying it (if someone ever comes up with an implementation).
You're too lazy to open a file, type the unit tests, and hit CTRL+S, but you're not too lazy to open the REPL, type the unit tests, and hit capture?
If you can't automate 100% of the REPL-into-test feature, if you need two mindsets/styles/etc., if you still need to "find the behaviour codified in a test", then you're just duplicating in the REPL the same workflow and results as writing the tests in a file. They need to overlap 100%.
How much context do you need? An entire memory dump? The complete REPL history? If IO is involved, do you need to somehow guarantee the same files are available at test time with the same contents?
It seems to me the trick is to set sensible limits on the context of the current REPL state preserved at test time, in a way that works for most kinds of common unit tests. I believe this is the "magic" of which you speak.
I dunno. I think this idea occurs to everyone who understands unit testing and then encounters REPLs; it certainly occurred to me under those circumstances and I got excited about it for a while too. Over time, though, it has struck me as less and less obviously good. Though you're right about where the two approaches to programming overlap, and I agree with you that REPL > tests in those areas, there's also considerable territory where they don't overlap. I suspect that xor represents an impedance mismatch that makes "test capture" not as feasible as it seems at first.
I don't mean to pour cold water on the idea, though; if someone figures out a way of doing it that's useful I'd happily change my mind.
http://docs.python.org/library/doctest.html