This is cool, but unlike some other recent neural game players [1] [2], it doesn't look at the screen at all. It also has an objective function that is tightly coupled to this particular game. As a result, the work doesn't generalize easily to other video games, nor would it transfer to a real-world self-driving car.
It would be easy to replace the input features with representations from a pixel-level convnet, making this a "real" self-driving car that goes from pixels to commands.
Anyone interested in this type of research: consider cloning the repo and implementing this modification; it would make a great starter project.
It is quite easy to swap the input features for raw pixels and feed them into a convnet under Keras (that's why I love Keras so much). However, gym_torcs only supports 64x64-pixel frames, which are hard for human eyes to make out, IMHO.
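To make the idea concrete, here is a minimal sketch of what such a pixel encoder might look like. This assumes a TensorFlow/Keras backend; the layer sizes, the function name `build_pixel_encoder`, and the 29-dimensional output (chosen to mirror a sensor-vector input, as an assumption, not taken from the repo) are all illustrative.

```python
# Hedged sketch: a convnet mapping a 64x64 RGB gym_torcs frame to a
# low-dimensional feature vector that could stand in for the hand-crafted
# sensor inputs of a DDPG actor/critic. All sizes are illustrative.
from tensorflow.keras import layers, models

def build_pixel_encoder(feature_dim=29):
    """feature_dim=29 mimics a typical TORCS sensor-vector size (assumption)."""
    return models.Sequential([
        layers.Input(shape=(64, 64, 3)),              # 64x64 RGB frame
        layers.Conv2D(16, 3, strides=2, activation="relu"),
        layers.Conv2D(32, 3, strides=2, activation="relu"),
        layers.Conv2D(32, 3, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(feature_dim, activation="linear"),  # replacement features
    ])

encoder = build_pixel_encoder()
```

The encoder's output could then be fed into the existing actor and critic networks in place of the sensor vector, keeping the rest of the training loop unchanged.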
Staying in the middle of the track is not a necessary part of the reward function. I included it only to speed up learning in the beginning. Once the agent has learned a reasonable policy, you can remove that term and see if the agent finds the optimal apex line. I will run a test tonight.
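A hedged sketch of such a reward, to show where the centering term sits. The function and argument names are assumptions for illustration, not the repo's exact code; the general shape (reward forward progress, penalize lateral motion and off-center position) follows common TORCS reward designs.

```python
import math

def reward(speed_x, angle, track_pos, centering_weight=1.0):
    """Illustrative TORCS-style reward (names are assumptions).

    speed_x   : car's speed along its own axis
    angle     : angle between car heading and track axis (radians)
    track_pos : normalized distance from track center (0 = centered)
    """
    progress = speed_x * math.cos(angle)           # velocity along the track
    lateral = speed_x * abs(math.sin(angle))       # velocity across the track
    centering = speed_x * abs(track_pos)           # stay-in-the-middle term
    # Set centering_weight=0.0 after a reasonable policy is learned,
    # letting the agent drift off-center to cut apexes.
    return progress - lateral - centering_weight * centering
```

With `centering_weight=0.0` the agent is no longer punished for hugging the inside of a corner, which is what an apex-seeking policy needs.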
Just as in the human world: you first learn how to drive before you learn how to drift the car.
Hi~ I used the Aalborg track as my training set and the Alpine1 track as my validation set. Alpine1 is three times longer than Aalborg. As you can see in the video, the agent drives reasonably well on the validation track.
Hi Bluetwo, I am currently travelling to San Francisco. Could you send me an e-mail at yanpan@gmail.com so I can contact you and e-mail you the result directly when I am back in Hong Kong?
[1] http://vizdoom.cs.put.edu.pl/
[2] https://deepmind.com/research/dqn/