This is not the thing to test. DNS resolution time is important, but even if the Google resolver returned results faster than your local ISP's resolver, it would still make your browsing slower in the general case. The problem is that the resolver IPs have a presence in very few locations around the world; I believe the number is just three at this time (Virginia, London, Taiwan). The big problem with this is the effect it has on CDNs and GSLBs. The DNS response you get for large sites, www.bing.com for example, depends on the location of the machine asking the question. If you use your local ISP as your resolver, you'll get an answer that's close to you. If you use the Google resolver, you'll get an answer that's close to the Google resolver, and that will add latency to the rest of the browsing session. This obviously affects all big sites that are NOT Google. They're clearly aware of the problem, since they special-case their own sites: if you have access to machines in different countries, you can see that the responses you get from 8.8.8.8 have good locality properties for Google sites. So I'm guessing they're just forwarding those queries to their location-aware DNS servers. Obviously they won't do that for Akamai and others.
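You can see the effect yourself with dig, if you have it installed; www.bing.com below is just the example from above, and any CDN-fronted site will do:

dig +short www.bing.com            # answered by your local/ISP resolver
dig +short @8.8.8.8 www.bing.com   # answered by Google Public DNS

If the site sits behind a GSLB, the two commands will often return different edge addresses; ping or traceroute to each to see which one is actually closer to you.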
End result: you're farther away from the resolver, farther away from your web pages, farther away from your CDN. I don't see how you can make a case that this is an improvement in general.
You can also just use namebench (http://code.google.com/p/namebench/): It hunts down the fastest DNS servers available for your computer by running a thorough benchmark using your web browser history, tcpdump output, or standardized datasets in order to provide an individualized recommendation.
Namebench is completely free and does not modify your system in any way. This project began as a 20% project at Google and runs on Mac OS X, Windows, and UNIX, and is available with a graphical user interface as well as a command-line interface.
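From memory, and assuming a source checkout where namebench.py is the entry point and resolver IPs are accepted as positional arguments (check the README for the exact flags), running it looks roughly like:

./namebench.py                           # full benchmark with the defaults
./namebench.py 8.8.8.8 208.67.222.222    # also include these specific resolvers in the comparison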
But I am wondering if this is really the thing to test. Unfortunately, this is extremely difficult, if not impossible, for us to independently test. It is a bit like looking for your lost car keys under the lamppost even though you lost them down the alley. We measured what we could measure, but that isn't what Google says is better.
Also, this does not test proper standards behavior, namely the response to a non-existent domain. Nor does it capture the fact that OpenDNS seems to do something goofy when you search Google for things.
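You can test that yourself by asking each resolver for a name that can't exist and reading the status field in dig's reply; the garbage label below is made up, and a standards-compliant resolver should answer NXDOMAIN:

dig @8.8.8.8 qwzzkx-no-such-host.example.com | grep status          # expect status: NXDOMAIN
dig @208.67.222.222 qwzzkx-no-such-host.example.com | grep status   # with OpenDNS's defaults of the time, you got NOERROR and a forged answer instead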
For several years now, one of the common things I've done for my client networks, as well as for friends and family, has been to put them on OpenDNS. They have been speedy, stable, and free from the beginning, during a time when you had to pay for most third-party DNS services. They also offer amazing controls for free, like easy content blocking, where hardware devices charge good money to do the same for business networks.
Google's Public DNS is painfully slow for me. I have no idea why, but when I tried switching DNS servers last night I couldn't load anything within a reasonable time. For fun, I started a request in Safari with Google's servers and let it sit for a few seconds. Then I switched to OpenDNS servers mid-request and it loaded instantly. I'm sticking with OpenDNS for the time being.
It's not too bad for us in Portland, OR, but that's why we wanted to make the tool available to everyone. Hoping to get some good results that we can share next week. My guess: it's great for some folks and terrible for others (as it is in your case).
Yeah. What's strange is that I'm still getting really good ping times to it, ~40 ms on average. Oh well, I'm happy with OpenDNS (which is bearable once the OpenDNS guide is disabled).
This is similar to what I found. It is still faster to use my local DJB dnscache than any of the ISP DNS servers. And Google DNS was slower than the rest.
I did notice that Level 3 and OpenDNS seem to cache better -- they get considerably faster on the second request.
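dig makes the cache effect easy to see: run the same query against one resolver twice and compare the query times it reports (4.2.2.2 is one of the well-known Level 3 resolvers):

dig @4.2.2.2 example.org | grep "Query time"   # cold: the resolver may have to do the recursive fetch
dig @4.2.2.2 example.org | grep "Query time"   # warm: answered from cache, usually much faster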
OpenDNS:
[4:55pm:~] DIVISION:tqbf [0:1]% nsping -z amazon.com 208.67.222.222
NSPING 208.67.222.222 (208.67.222.222): Domain = "amazon.com", Type = "IN A"
+ [ 0 ] 55 bytes from 208.67.222.222: 29.254 ms [ 0.000 san-avg ]
...
+ [ 35 ] 55 bytes from 208.67.222.222: 28.894 ms [ 43.104 san-avg ]
^C
Total Sent: [ 36 ] Total Received: [ 35 ] Missed: [ 1 ] Lagged [ 0 ]
Ave/Max/Min: 43.104 / 232.379 / 24.222
Google:
[4:56pm:~] DIVISION:tqbf [0:1]% nsping -z amazon.com 8.8.8.8
NSPING 8.8.8.8 (8.8.8.8): Domain = "amazon.com", Type = "IN A"
- [ 0 ] 100 bytes from 8.8.8.8: 71.366 ms [ 0.000 san-avg ]
...
- [ 27 ] 99 bytes from 8.8.8.8: 61.967 ms [ 86.705 san-avg ]
^C
Total Sent: [ 29 ] Total Received: [ 27 ] Missed: [ 2 ] Lagged [ 0 ]
Ave/Max/Min: 86.705 / 223.928 / 48.951
AT&T, via my home router:
[4:57pm:~] DIVISION:tqbf [0:2]% nsping -h www.amazon.com 192.168.1.254
NSPING 192.168.1.254 (192.168.1.254): Hostname = "www.amazon.com", Type = "IN A"
+ [ 0 ] 48 bytes from 192.168.1.254: 48.595 ms [ 0.000 san-avg ]
...
+ [ 20 ] 48 bytes from 192.168.1.254: 34.261 ms [ 36.207 san-avg ]
^C
Total Sent: [ 21 ] Total Received: [ 20 ] Missed: [ 1 ] Lagged [ 0 ]
Ave/Max/Min: 36.207 / 59.454 / 30.943
But, note the + in front of the OpenDNS result, which means we didn't get NXDOMAINs for random names, which is, all due respect, some bullsh*t right there. (The AT&T results are just for "www.amazon.com", which reads straight from the cache and doesn't measure the performance of the recursive fetch.)
This is very broken default behavior. It has real security problems (it violates an assumption of the same-origin policy that scopes your browser cookies), it disrupts email, it breaks any application that needs NXDOMAIN (whether you know it does or not), and it's part of an arms race to fuck up^H^H^H^H^H^H "monetize" the infrastructure.
Google doesn't redirect NXDOMAINs. That's worth several milliseconds of response time for me.
> But, note the + in front of the OpenDNS result, which means we didn't get NXDOMAINs for random names, which is, all due respect, some bullsh*t right there.
Would you care to explain that remark to people like me who don't know the ins and outs of DNS? What's an NXDOMAIN? And why is it bullshit?
NXDOMAIN = non-existent domain. In this case, instead of the DNS server telling your resolver that a given name doesn't exist, it tells it that every such name resolves to OpenDNS's search service. There's a summary of some of the issues here: http://www.semicomplete.com/blog/geekery/comcast-dns-hijack-...
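Concretely, you can watch for the forged answer with dig; the name below is made up, so the only honest response is NXDOMAIN with no answer records:

dig +noall +answer @208.67.222.222 this-name-is-garbage-kjhzzq.com
# an honest resolver prints nothing here;
# a redirecting one (OpenDNS with its then-default settings) prints an A record pointing at its search server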
There's nothing at all wrong with your test, it's just different from mine (to be perfectly honest, I didn't even read yours closely at first; I just wanted to bust out nsping).
By querying 1000 popular sites, you're really just testing the network and software performance of the three servers. Every one of those names is guaranteed to be in their cache. By randomizing the labels, you can factor the cache out of the benchmark. Does this matter much? Meh.
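If you want to reproduce the randomized-label trick without nsping, a quick shell hack (assuming a bash-like shell with $RANDOM) is to stick a label no cache can have seen under the zone:

dig @8.8.8.8 probe-$RANDOM-$RANDOM.amazon.com | grep "Query time"   # guaranteed cache miss, so this times the recursive fetch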
I agree with your performance conclusion (your ISP is fastest) with two caveats:
* AT&T's DNS sucks ass; it's fast right now, because it sucks and wants to screw up my benchmark, but 10 minutes from now it's going to go back to being nonresponsive. I'll happily surrender 50ms for consistent performance.
* The major win for third-party DNS (and the same win for running your own local cache) isn't performance; it's that it always Just Works.
No matter what Google says about 100ms responsiveness differences decreasing user engagement by 20%, I don't believe that 10-15ms is noticeable.
How is this supposed to produce any sensible result, when their testing servers are in vastly different locations from anyone's desktop? Their testing server can't even REACH the caching resolver running on my home gateway, and even if it could, it would be many more network hops away from it than my PC is.
I realise running BIND or djbdns' dnscache is not for everyone. However, even your ISP's DNS server will be much closer to you network-wise, and will thus show less latency. No one should ever use OpenDNS or Google DNS.
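For anyone who does want the local-cache route, the djbdns setup is only a few commands. A sketch from memory of djb's standard instructions, assuming daemontools is already supervising /service; the account names and paths are the conventional ones, not gospel:

useradd -s /bin/false dnscache                          # unprivileged account for the cache itself
useradd -s /bin/false dnslog                            # unprivileged account for its logger
dnscache-conf dnscache dnslog /etc/dnscache 127.0.0.1   # generate the service directory
ln -s /etc/dnscache /service                            # daemontools picks it up within seconds
echo "nameserver 127.0.0.1" > /etc/resolv.conf          # point the box at its own cache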