I followed the last Google AI Challenge pretty closely, and enjoyed it, but how does the organizer Jeff Cameron keep getting away with invoking Google in the competition's name? He has basically zero affiliation with Google, and the last round of the competition had some pretty embarrassing snafus. (For instance, registration was broken for quite a while because he was using the Google SMTP server to send out messages, and when the competition opened and his registration script started sending out huge numbers of registration emails, Google shut him down as a probable spammer.)
From what I read on the IRC channel (#aichallenge on freenode) some time ago, someone involved worked at Google and managed to get an OK to use the name, even though Google has nothing to do with the contest. Don't ask me how that happened.
Go to the IRC channel for a while and you are sure to hear this story, since new people constantly ask why Google doesn't put more resources into the challenge.
I wish Google was a little more involved. The competition could've used some resources or funding last time. The format has a lot of potential since any language[1] can be used as long as you can get it to run on Ubuntu 10.something and make a starter package.
[1] as opposed to most other competitions where one is restricted to the usual suspects (Java, C#, Python).
I did a git clone last night and carefully read through the starter kits for Common Lisp, Java, and Python. A long time ago, in my book "C++ Power Paradigms," the last example used a genetic algorithm to train a recurrent neural net on the Santa Fe Ant Trail problem. My original idea was to start with weights represented by relatively few bits, and to add less significant bits gradually during training. My hope was that I would quickly find reasonable regions of weight values, so the search for good values with a reasonable number of bits per weight would not take too long. After I wrote that, John Koza told me it was an interesting idea but probably not useful. That said, I am tempted to brush off my old idea and try it again.
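For anyone curious what that progressive-precision idea might look like, here is a minimal sketch. Everything here (the function names, the bit widths, the toy fitness function, the mutation rate) is invented for illustration and is not from the book; it just shows the core mechanic of evolving low-precision integer genes and then appending less-significant bits between stages.

```python
# Hypothetical sketch of progressive bit-width weight evolution.
# Genes are unsigned integers of `bits` bits; between stages, each gene
# is shifted left and the new low-order bits are filled randomly, so the
# coarse solution found so far is preserved while resolution increases.
import random

def decode(genome, bits):
    """Map each `bits`-bit unsigned integer gene to a weight in [-1, 1]."""
    scale = (1 << bits) - 1
    return [2.0 * g / scale - 1.0 for g in genome]

def widen(genome, old_bits, new_bits):
    """Add less-significant bits: shift left, fill the new bits randomly."""
    extra = new_bits - old_bits
    return [(g << extra) | random.getrandbits(extra) for g in genome]

def evolve(fitness, n_weights=4, start_bits=4, max_bits=8,
           pop=20, gens_per_stage=30):
    bits = start_bits
    population = [[random.getrandbits(bits) for _ in range(n_weights)]
                  for _ in range(pop)]
    while True:
        for _ in range(gens_per_stage):
            # Keep the fitter half; each survivor spawns one mutated child.
            population.sort(key=lambda g: fitness(decode(g, bits)),
                            reverse=True)
            survivors = population[:pop // 2]
            children = [
                [g ^ (1 << random.randrange(bits))  # flip one random bit
                 if random.random() < 0.3 else g
                 for g in parent]
                for parent in survivors
            ]
            population = survivors + children
        if bits >= max_bits:
            break
        population = [widen(g, bits, bits + 2) for g in population]
        bits += 2
    best = max(population, key=lambda g: fitness(decode(g, bits)))
    return decode(best, bits)

# Toy fitness: prefer weights close to a fixed target vector
# (a stand-in for "how well does the net walk the ant trail").
target = [0.5, -0.25, 0.75, 0.0]
fit = lambda w: -sum((a - b) ** 2 for a, b in zip(w, target))
```

With 4 starting bits each weight can only take 16 values, so early generations search coarsely; by 8 bits the same genomes describe a 256-level grid around whatever region the coarse search settled on.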
The Common Lisp starter kit was implemented quite differently from the Python and Java kits (which I think were designed and written by the same person). The Common Lisp kit had just one example bot, so maybe it is still being worked on.
Perhaps the Python and Java kits used the latter (as I should have done).
More example bots will probably not be added to the CL package by me (but feel free to do so!). I agree with the description on one of the wiki pages that the starter packages should be pretty minimal, so that users will see improvements quickly when they start tinkering.
I've just taken care of the plumbing so the participants can get started with the fun things right away :-)
Choosing Ants as the next challenge is a bit of a bet. I'm a bit afraid that the game is too random to get really interesting AIs. Fog of war + food spawning at random places can bring early advantage to those players that explored the map 'just right'.
Disclaimer: I really enjoyed playing the Tron and PlanetWars challenges. Although I was a bit disappointed with my final ranking in PlanetWars (~80), I think that game was close to perfect.
The official start is still a couple of weeks away, although you are free to start working on a bot already. Don't complain if the specifications (or the whole game!) change, though.
My guess is that this time the challenge will last about two months. Most participants felt three months, like the last one (Planet Wars), was a little long, and the one before that (Tron) was a little short at three or four weeks.