Viterbi is used for decoding, e.g., HMMs: when there are multiple possible hidden state sequences that could have produced the observations, it finds the most probable one. In a normal (non-hidden) Markov model there is nothing to decode; you can just follow the state transitions and read off a probability (or go to a state and see what the next possibilities are).
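For concreteness, here is a minimal Viterbi sketch in Python. The model itself is a made-up toy (intended letters as hidden states, typed letters as noisy observations); only the dynamic-programming shape is the point:

```python
# Minimal Viterbi decoder: find the most probable hidden state sequence
# for a sequence of observations under an HMM.
def viterbi(obs, states, start_p, trans_p, emit_p):
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]  # backpointers for reconstructing the path
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for s in states:
            # Pick the predecessor that maximizes the path probability.
            prev = max(states, key=lambda p: best[t - 1][p] * trans_p[p][s])
            best[t][s] = best[t - 1][prev] * trans_p[prev][s] * emit_p[s][obs[t]]
            back[t][s] = prev
    # Walk the backpointers from the best final state.
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy numbers: hidden states are intended letters, observations typed letters.
states = ("a", "e")
start_p = {"a": 0.6, "e": 0.4}
trans_p = {"a": {"a": 0.3, "e": 0.7}, "e": {"a": 0.6, "e": 0.4}}
emit_p = {"a": {"a": 0.8, "e": 0.2}, "e": {"a": 0.3, "e": 0.7}}
print(viterbi(("a", "a", "e"), states, start_p, trans_p, emit_p))
```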
An HMM is exactly what you have in both predictive typing and speech recognition, since in both cases you've got some form of sensor noise to deal with.
But predictive typing usually corrects per word. Given a sequence of words, you can offer the top suggestions (like Swype and others do) using a non-hidden Markov model. For partially typed words, it's easier to take words from that same suggestion list and rank them by a combination of probability and some similarity measure (e.g. edit distance), possibly complemented by candidates from outside the suggestion list found through some other method (e.g. Levenshtein automata); a small sketch of this ranking follows the examples below. If you want to correct
predctiv tping an spech recgnition
then yes, you'll want to use a hidden Markov model. If you already have
predictive typing and speech recgn
then a normal Markov model will serve you fine. And since such predictive keyboards correct per word, they are like the latter example, not the former. For speech recognition (e.g. Google Now), you do indeed need an HMM.
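Here is a minimal sketch of that per-word ranking, assuming a toy bigram model and plain Levenshtein distance; the counts, the prefix comparison, and the probability-times-similarity score are illustrative assumptions, not how any particular keyboard does it:

```python
# Toy per-word ranking: combine next-word probability from a (non-hidden)
# Markov model with edit-distance similarity to the partially typed word.

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Made-up bigram counts standing in for P(next word | previous word).
bigram_counts = {
    "speech": {"recognition": 40, "synthesis": 15, "therapy": 5},
}

def rank_suggestions(prev_word, typed_prefix, top_n=3):
    counts = bigram_counts.get(prev_word, {})
    total = sum(counts.values()) or 1
    scored = []
    for word, count in counts.items():
        prob = count / total
        # Compare against a same-length prefix of the candidate, so a
        # partially typed word isn't penalized just for being short.
        dist = edit_distance(typed_prefix, word[:len(typed_prefix)])
        similarity = 1.0 / (1.0 + dist)
        scored.append((prob * similarity, word))
    return [w for _, w in sorted(scored, reverse=True)[:top_n]]

print(rank_suggestions("speech", "recgn"))  # 'recognition' ranks first
```

A real keyboard would smooth these probabilities and use a richer similarity model (keyboard-adjacency-aware edit costs, or Levenshtein automata over a trie of the vocabulary), but the probability-times-similarity shape stays the same.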