This makes it clear why dynamic programming is not easy. The problem occurs already in step 1 -- we naturally tend to think in loops, not recursion. So before you find a recursive algorithm, you probably come up with a loop algorithm first. Then you have to try to convert the loop into recursion, then go back to loops that use caching. And hope that the result is more efficient than the loop algorithm you started with.
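To make the conversion concrete, here is a sketch using Fibonacci (my example here, though it is also the one the article uses): the loop version most people write first, and the naive recursive version you get in step 1.

```python
# The loop algorithm we tend to come up with first: O(n) time, O(1) space.
def fib_loop(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Step 1: the same computation rewritten as a recursion.
# Without caching this recomputes the same subproblems over and over,
# so it is exponential -- strictly worse than the loop we started with.
def fib_rec(n):
    if n < 2:
        return n
    return fib_rec(n - 1) + fib_rec(n - 2)
```

This is exactly the awkward middle state the parent describes: the recursion is only a win once the caching steps are applied on top of it.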
Speak for yourself -- there are dozens (dozens!) of us for whom recursion is the more natural way to think about most algorithms. Joking aside, DP is very straightforward in more functional languages like Haskell, where recursion is the default way to iterate anyway.
Indeed, to me, step 1 is the hardest. Not only do you need a recursive algorithm, but you also need an algorithm where memoization is possible.
Steps 2 and 3 are more systematic. That is, if you understand the Fibonacci sequence example in the article, you can probably do all of the problems. It still requires training, especially if you are doing competitive programming, but it is always essentially the same thing.
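Assuming the article's steps are the usual ones (step 2: memoize the recursion, step 3: convert it into a bottom-up table), the systematic part looks roughly like this for Fibonacci:

```python
from functools import lru_cache

# Step 2: cache the recursive calls (top-down memoization).
# Same recursion as before, but each subproblem is computed once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Step 3: replace the recursion with a loop that fills the cache
# in dependency order (bottom-up dynamic programming).
def fib_table(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Once the recurrence exists, these two transformations are mechanical, which is the point being made: the creative work is all in step 1.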
Finding the algorithm is the tricky part, as there is no set formula.
That's why the Fibonacci sequence is a good introduction. You know where you are starting from: step 1 is trivial because the formula is given to you. And you know where you are going, as you probably already have a good idea of how you would implement it iteratively. So you won't get lost in the problem-solving part, which can be hard and demand a lot of attention, and can instead focus on the basic technique. Once that is well understood, it is time to solve the actual problems, where step 1 is not trivial and where intuition alone is not enough to find the final solution.