That being said, the Atoms themselves are not a good benchmark for performance per watt. I'd be more interested in a comparison against current-generation Xeon or Interlagos systems. It sounds silly, but ARM has been making progress, and I could see ARM cores being used in the future as companions to GPUs in computing clusters. With current GPGPU programming models like OpenACC, it doesn't make much sense to put 16 race horses (Interlagos) next to an ant colony (a Fermi GPU), unless you're aiming for high flexibility.
Well, I think Intel wants Atom to compete with ARM for that market, so it may be relevant.
There's also Nvidia's Project Denver, which will probably come out in 2014. It's based on the 64-bit ARMv8 architecture; it's a custom CPU made in collaboration with ARM, and I think they want to pair it with Maxwell, their next-gen GPU architecture. It's intended for servers and supercomputers.
I've heard about that Nvidia project, and I think they're on the right track. The only thing missing is enough programmers (and thus software) for this model. As an example (and I'm saying this as a layman when it comes to databases), I think DBMSs might be able to profit a lot from the GPGPU-based model. For databases with high read traffic, you could scale a system with n GPUs based on how much "storage" the database needs (the storage being GPU RAM, continuously mirrored to hard disks whenever writes occur).
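Just to make the mirroring idea concrete, here's a toy sketch of that read/write split in plain Python. This is purely illustrative (the class and its methods are my own invented names, and an ordinary dict stands in for GPU RAM): all reads are served from the in-memory copy, while every write is mirrored through to durable storage on disk.

```python
# Toy illustration of the "GPU RAM as storage, mirrored to disk" idea.
# A plain dict stands in for GPU memory; a JSON file stands in for the
# hard disk. Reads never touch the disk; writes go to both.
# All names here (MirroredStore, get, put) are hypothetical.
import json
import os
import tempfile


class MirroredStore:
    """Reads come from memory; writes hit memory and are mirrored to disk."""

    def __init__(self, path):
        self.path = path
        self.mem = {}  # stands in for GPU RAM: the copy that serves all reads
        if os.path.exists(path):
            with open(path) as f:
                self.mem = json.load(f)  # warm the in-memory copy on startup

    def get(self, key):
        return self.mem[key]  # read traffic is served entirely from memory

    def put(self, key, value):
        self.mem[key] = value
        with open(self.path, "w") as f:  # mirror the write for durability
            json.dump(self.mem, f)


path = os.path.join(tempfile.mkdtemp(), "store.json")
db = MirroredStore(path)
db.put("answer", 42)
print(db.get("answer"))  # -> 42

# A fresh instance recovers the state from the disk mirror:
db2 = MirroredStore(path)
print(db2.get("answer"))  # -> 42
```

In a real multi-GPU version, the dict would be partitioned across the RAM of the n GPUs, and the disk mirror would only need to keep up with the write rate, not the read rate.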