The above poster used 'TDD', not 'unit test', they are not the same thing.
You can (and often should!) have a suite of unit tests, but you can choose to write them after the fact, and after the fact means after most of the exploration is done.
I think if most people stopped thinking of unit tests as a correctness mechanism and instead thought of them as a regression mechanism, unit tests as a whole would be a lot better off.
Also as a dependency canary: when your low level object tests start demanding access to databases and config files and networking, it's time for a think.
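A minimal sketch of what that canary might look like, using Python's standard unittest module; apply_discount and the commented-out database/config calls are hypothetical stand-ins, not anyone's real code:

    import unittest

    # A pure function: no I/O, so its test needs no external setup at all.
    def apply_discount(price_cents, percent):
        return price_cents - (price_cents * percent) // 100

    class ApplyDiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(1000, 10), 900)

        # The canary: the moment a low-level test like this starts needing
        #   conn = psycopg2.connect(...)              # a database
        #   cfg = load_settings("/etc/app.conf")      # a config file
        # that's the hint that hidden dependencies have crept into the object
        # under test, and it's time for that think.

    if __name__ == "__main__":
        unittest.main()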
Also a passing unit test always provides up-to-date implicit documentation on how to use the tested code.
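As a small illustration of that documentation effect (again plain unittest; parse_iso_date is a hypothetical helper): a reader can see from the test alone what input the function expects and what it returns, without opening the implementation.

    import unittest
    from datetime import date

    # Hypothetical helper; the test below doubles as usage documentation.
    def parse_iso_date(text):
        year, month, day = (int(part) for part in text.split("-"))
        return date(year, month, day)

    class ParseIsoDateTest(unittest.TestCase):
        def test_documents_expected_input_and_output(self):
            # The call site and the expected result are both spelled out here,
            # and they stay current as long as the test keeps passing.
            self.assertEqual(parse_iso_date("2024-03-01"), date(2024, 3, 1))

    if __name__ == "__main__":
        unittest.main()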
None of this either/or reasoning matches my experience. In practice I write tests both before and after implementation, for different reasons, and my tests both check correctness and, of course, serve as regression tests.
Writing before the fact allows you to test your mental model of the interface, unspoiled by having the implementation fresh in your mind. (Not entirely, since you probably have some implementation ideas very early on.)
Writing tests after the fact is what you must do to explore 1) weak points that occur to you as you implement, and 2) bugs. After-the-fact testing also allows you to home in on vagueness in the spec, which may show up as 1) or 2).