If that question is too general (which it is), then consider the case of building a data structure and its accompanying algorithms. How, in real-world practice, is code verified both for correctness and for desired performance?
Right now I rely on running a bunch of functions that I write, but I was wondering if there's anything better out there (or a methodology that dictates which tests one ought to write).
(If the question is still too general, let's consider the case of Java, and maybe C.)
It's kind of a quantum physics problem, at least with respect to testing in general. Code that is completely untested hides all of its errors, and code never put through its paces won't show a performance problem. As the code gets closer and closer to the "real world" (compiling, initial tests, beta, 1.0, 2.0, etc.), bugs and scaling issues will show up (often somewhat "magically"). The best you can do to counter this is to make the most efficient use of your time given the quantity and complexity of features, the schedule, and the available manpower, so that the remaining time can go towards a thorough test cycle.
Code built within restricted computational models (stronger type systems, garbage-collected memory, functional-style code, relational logic...) can eliminate entire classes of errors. This doesn't eliminate the benefits of tests, but it makes it possible to focus your tests on a smaller subset of all errors.
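As a small illustration of that point, here is a sketch in Java (the class and variable names are made up for the example): a stronger type system turns a mistake that a test would otherwise have to catch at runtime into a compile-time rejection.

```java
import java.util.ArrayList;
import java.util.List;

public class TypeSafetyDemo {
    public static void main(String[] args) {
        // Raw-type list: the compiler cannot see what it holds,
        // so a bad element only fails later with a ClassCastException.
        List rawNames = new ArrayList();
        rawNames.add("Alice");
        rawNames.add(42); // compiles fine, blows up when read as a String

        // Generic list: the same mistake is rejected at compile time,
        // so this whole class of error never reaches a test run.
        List<String> names = new ArrayList<>();
        names.add("Alice");
        // names.add(42); // does not compile

        String first = names.get(0); // no cast needed, no ClassCastException possible
        System.out.println(first);
    }
}
```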
Code with an extensive ongoing review process (e.g. space shuttle code, or perhaps the Linux kernel) can eliminate a different class of errors than regular tests or restricted models can, because it uses the power of human minds to reason through the concepts repeatedly; a mistake made by one programmer is not likely to be repeated in exactly the same way by ten or twenty of them.
Also worth considering are test scaffolding and debugging tools. In a large codebase, errors can appear farther and farther from their origin. This leads to a "test suite" (unit tests, functional tests, example datasets) run more or less independently of the application. For some kinds of applications, relatively elaborate debugging features may be necessary to display and step through core data structures while the app runs. Debugging-related features are easy to overlook, but they are often well worth the time spent, and I have taken to adding them whenever I encounter a class of bugs they would help address, rather than muddling through the first instance and saying "hope THAT doesn't happen again!"
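To make the "test suite" part concrete, here is a minimal sketch in Java, assuming JUnit 5 is on the classpath; IntStack is a toy data structure invented for the example, and the tests simply pin down the invariants you would otherwise state in prose (LIFO order, defined behaviour at the empty boundary).

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

// Hypothetical structure under test: a fixed-capacity integer stack.
class IntStack {
    private final int[] items;
    private int size;

    IntStack(int capacity) { items = new int[capacity]; }

    void push(int value) {
        if (size == items.length) throw new IllegalStateException("full");
        items[size++] = value;
    }

    int pop() {
        if (size == 0) throw new IllegalStateException("empty");
        return items[--size];
    }

    boolean isEmpty() { return size == 0; }
}

// Each test states one invariant of the structure, independent of any application code.
class IntStackTest {
    @Test
    void popReturnsItemsInLifoOrder() {
        IntStack s = new IntStack(4);
        s.push(1);
        s.push(2);
        assertEquals(2, s.pop());
        assertEquals(1, s.pop());
        assertTrue(s.isEmpty());
    }

    @Test
    void poppingAnEmptyStackFails() {
        IntStack s = new IntStack(4);
        assertThrows(IllegalStateException.class, s::pop);
    }
}
```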
Also, to a large extent, the language and environment dictate the debugging methods: C code benefits from a machine-level debugger like gdb, but in languages with runtime reflection like Python or Ruby, you rarely need more than a print statement to uncover a problem. If you are working with an embedded device instead of a desktop OS, you may have a remote monitor system or an emulator. If you're working on a webapp, you have server logs and browser-level tools. Et cetera.