I would add one more item to the list: replicate the results of at least a few "classic" deep learning papers from scratch in one of the popular frameworks (TensorFlow, Torch, Caffe, etc.), instead of downloading code written by others. For example, build and train AlexNet or one of the VGG nets, a Word2Vec model, an image captioning model (a CNN feeding an LSTM RNN), and a Pong- or Breakout-playing AI (a CNN trained with reinforcement learning). All of this can be done on a single machine with a relatively inexpensive GPU.
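To give a sense of the scale involved, here is a toy sketch of the Word2Vec idea (skip-gram with a full softmax) in plain NumPy. This is a hypothetical illustration, not the original C implementation or any framework's API: real Word2Vec uses negative sampling or hierarchical softmax to scale to large vocabularies, but the core "predict context words from a center word" objective is the same.

```python
import numpy as np

def train_word2vec(corpus, dim=8, window=2, lr=0.05, epochs=200, seed=0):
    """Toy skip-gram Word2Vec: full softmax, plain SGD (illustrative only)."""
    rng = np.random.default_rng(seed)
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    W_in = rng.normal(scale=0.1, size=(V, dim))   # input (center-word) embeddings
    W_out = rng.normal(scale=0.1, size=(dim, V))  # output (context) weights

    # Build (center, context) training pairs within the window.
    pairs = []
    for i, w in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j != i:
                pairs.append((idx[w], idx[corpus[j]]))

    losses = []
    for _ in range(epochs):
        total = 0.0
        for c, o in pairs:
            h = W_in[c].copy()            # projection layer (the embedding)
            scores = h @ W_out
            scores -= scores.max()        # for numerical stability
            p = np.exp(scores)
            p /= p.sum()
            total += -np.log(p[o] + 1e-12)
            grad = p.copy()
            grad[o] -= 1.0                # softmax cross-entropy gradient
            grad_h = W_out @ grad
            W_out -= lr * np.outer(h, grad)
            W_in[c] -= lr * grad_h
        losses.append(total / len(pairs))
    return {w: W_in[idx[w]] for w in vocab}, losses
```

Even this tiny version makes the moving parts concrete (embedding lookup, softmax over the vocabulary, SGD updates), which is exactly the kind of understanding you don't get from running someone else's code.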