Cool! At some point, you could also consider building an index: retrieve "possible candidate corrections" by querying this index, and score the candidates using some combination of features (such as edit distance, phonetic similarity, etc.).
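Roughly what I have in mind, as a sketch rather than a drop-in implementation: assume the candidates have already come back from the index lookup, then rank them by a combination of edit distance and a phonetic-match bonus. The candidate list, the weights, and the hand-rolled Soundex key below are all just for illustration.

```python
def edit_distance(a, b):
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def soundex(word):
    """Tiny simplified Soundex key, purely for the phonetic-similarity feature."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out, prev = [], codes.get(word[0], "")
    for c in word[1:]:
        d = codes.get(c, "")
        if d and d != prev:
            out.append(d)
        prev = d
    return (word[0].upper() + "".join(out) + "000")[:4]

def score(query, candidate):
    """Combine two features: edit distance (lower is better) and a
    phonetic-match bonus. The weights are arbitrary illustration values."""
    s = -edit_distance(query, candidate)
    if soundex(query) == soundex(candidate):
        s += 2  # hypothetical bonus for "sounds alike"
    return s

# Hypothetical candidate set, e.g. the result of an index lookup.
candidates = ["receive", "recieve", "relieve", "reserve"]
query = "receeve"
for cand in sorted(candidates, key=lambda c: score(query, cand), reverse=True):
    print(cand, score(query, cand))  # best-scoring candidates print first
```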
As you mentioned, a phonetic algorithm like Double Metaphone (stored e.g. in an SQL database) will probably give you better results, provided you order the results by score.
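Something along these lines, sketched against SQLite: precompute the Double Metaphone key for every dictionary word, store it in an indexed column, look up candidates by the key of the input word, and order them by a score. The third-party `metaphone` package (for `doublemetaphone()`) and the tiny dictionary are assumptions; any real database and scoring function would do.

```python
import sqlite3
from difflib import SequenceMatcher          # cheap similarity score, stdlib only
from metaphone import doublemetaphone        # third-party package, assumed available

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (word TEXT PRIMARY KEY, dmeta TEXT)")
conn.execute("CREATE INDEX idx_dmeta ON words(dmeta)")

dictionary = ["phonetic", "fanatic", "frantic", "fantastic"]
conn.executemany(
    "INSERT INTO words VALUES (?, ?)",
    [(w, doublemetaphone(w)[0]) for w in dictionary],  # store the primary key only
)

def suggest(misspelling, limit=5):
    """Fetch words sharing the Double Metaphone key, best-scoring first."""
    key = doublemetaphone(misspelling)[0]
    rows = conn.execute("SELECT word FROM words WHERE dmeta = ?", (key,))
    scored = [(SequenceMatcher(None, misspelling, w).ratio(), w) for (w,) in rows]
    return [w for _, w in sorted(scored, reverse=True)[:limit]]

print(suggest("fonetik"))  # likely ['phonetic', ...] if the keys coincide
```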
I was also wondering about using phonetic matching as a substitute for spell checking.
I guess one downside of phonetic matching vs. spell checking would be that you're not really telling the user that results are being shown for a different term than the one they entered?
For the input word, generate all variants that are within an edit distance of 2, compute their hashes, and check whether any of those hashes appear in the set of hashes generated from the corpus.
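A minimal sketch of that in Python; the hash function and the corpus words are placeholders, and brute-force generation of all distance-2 variants is only practical for fairly short words.

```python
import hashlib

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one edit away: deletions, transpositions, replacements, insertions."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = {L + R[1:] for L, R in splits if R}
    transposes = {L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1}
    replaces   = {L + c + R[1:] for L, R in splits if R for c in ALPHABET}
    inserts    = {L + c + R for L, R in splits for c in ALPHABET}
    return deletes | transposes | replaces | inserts

def edits2(word):
    """All strings within two edits (edits1 applied twice)."""
    return {e2 for e1 in edits1(word) for e2 in edits1(e1)}

def word_hash(word):
    """Whatever hash the corpus side used; SHA-1 here purely as a stand-in."""
    return hashlib.sha1(word.encode("utf-8")).hexdigest()

# Hypothetical corpus-side set of hashes, built once from the dictionary.
corpus_hashes = {word_hash(w) for w in ["spelling", "spalling", "spilling"]}

def candidates(word):
    """Variants of `word` within edit distance 2 whose hash appears in the corpus set."""
    variants = {word} | edits1(word) | edits2(word)
    return {v for v in variants if word_hash(v) in corpus_hashes}

print(candidates("speling"))  # expect {'spelling', 'spalling', 'spilling'}
```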