One of the problems with using 'public' DNS servers like Google's and OpenDNS is that content delivery networks return the IPs of the nodes closest to your DNS server, on the assumption that you are on the same network as that server.
I just did a comparison and I am 10ms away from the Akamai node returned by my ISP's DNS servers, 88ms away from the node returned by Google Public DNS, and 20ms away from the node returned by OpenDNS. Even if DNS is faster, it may make everything else slower...
but I am willing to sacrifice a few hundred ms to avoid the several seconds of frustration when typing 'gmail' into the Firefox address bar redirects me to some stupid ass ISP-specific SERPs page =D
The other alternative is to just run your own instance of BIND (or dnscache, if that's what you prefer). CDN content will usually perform better, but non-CDN lookups will often lose a few ms because your cache will be mostly cold. As a bonus, you can perform some stupid LAN tricks.
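For the BIND route, a caching-only resolver takes very little configuration. A minimal named.conf sketch — the directory path, network range, and local zone name here are illustrative assumptions, not anything from this thread:

```
// Caching-only resolver: recurse for local clients, cache the answers
options {
    directory "/var/named";
    recursion yes;
    allow-recursion { 127.0.0.1; 192.168.0.0/24; };  // assumed LAN range
};

// The "LAN tricks" part: serve your own names for local machines
zone "lan" {
    type master;
    file "db.lan";  // hypothetical zone file with your LAN hosts
};
```

Point your machines' resolver settings at the box running this and non-CDN lookups warm up the cache over time, while CDN lookups resolve against your actual network location.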
First, modern CDNs can use anycast (BitGravity, for example) to control in detail which content server a given user connects to. Second, a CDN operator can anycast its authoritative servers to minimize time spent in recursive DNS lookups (I believe Akamai uses a combination along these lines).
Alternately, a third-party DNS operator can use anycast to control which resolver a given user reaches, ensuring that the resolver is topologically close to the user. Since Google's nameservers are associated with hostnames of the form any-in-XXXX.1e100.net, I'd assume they're anycast already, or will be in the future.
Open a command prompt and launch nslookup to do the DNS queries.
Once in nslookup, type server <ip> at the prompt to switch DNS providers, then just type a hostname to have it looked up.
Use a second window to ping the resulting IP addresses.
edit: these instructions also work on Linux/FreeBSD, and should work on OS X as well
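The same comparison can be scripted. Here's a minimal Python sketch of what nslookup's server command does under the hood: send an A query straight to a resolver of your choosing over UDP and time the round trip. The packet layout is plain RFC 1035; which names and resolver IPs you feed it are up to you.

```python
import socket
import struct
import time

def build_query(name, qid=0x1234):
    """Build a minimal DNS query packet for an A record (RFC 1035)."""
    # Header: id, flags (RD=1 asks the server to recurse),
    # 1 question, 0 answer/authority/additional records
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # Question: name as length-prefixed labels, then QTYPE=A, QCLASS=IN
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def timed_lookup(name, server, timeout=2.0):
    """Send the query to `server` over UDP; return elapsed milliseconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        start = time.monotonic()
        sock.sendto(build_query(name), (server, 53))
        sock.recv(512)
        return (time.monotonic() - start) * 1000.0
    finally:
        sock.close()

# e.g. timed_lookup("news.ycombinator.com", "8.8.8.8")
#  vs. timed_lookup("news.ycombinator.com", "208.67.222.222")
```

Run each lookup a few times per resolver, since the first query will usually be a cache miss and later ones a hit; then ping the returned addresses separately, as above.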
More data mining by google under the guise of global awesomeness.
The point most people miss is that all of your habits and information are under one roof, and it takes only one subpoena to get your entire electronic life on DVD. This just adds to what they already know about your searching, emailing, communicating and spending.
The Google product manager responded to TechCrunch saying "no blocking, hijacking, or filtering" and responded on privacy with "Collected data includes IP address (up to 48 hours, to detect malicious behavior against the service), ISP information and geographic information (2 weeks each). The data is not correlated with your Google account in any way."
It may not be correlated, but that doesn't mean that Google isn't getting a ton of commercial benefit from this data. It's like having your own Alexa/Quantcast, but in realtime, and with much better granularity.
I agree. I also think it's fair for the provider of a public service to derive some benefit from running it. The obvious questions are: Is it ethical? Is it honest? (Google phrases it as "Don't be evil.")
In this case, if they are providing an alternative to filtering and hijacking, I see that as a really important step forward. The fact that a lot of people expected those behaviors shows just how bad the situation with DNS really is these days.
We don't correlate or combine your information from these logs with any other log data that Google might have about your use of other services, such as data from Web Search and data from advertising on the Google content network. After keeping this data for two weeks, we randomly sample a small subset for permanent storage.
It sounds like a good faith effort aimed at speeding things up to me.
I agree, but said data would be a sample from a user base that is far from the general user base of the internet, and Google's user base too. Data from Chrome browsers only would be heavily skewed.
It's an interesting idea, but based on some early tests it looks like a losing bet to me.
My ping from a dedicated 15 Mbps Qwest circuit to 8.8.8.8 and 8.8.4.4 is between 57ms and 75ms. However, cached DNS resolutions run at around 12ms from our default DNS servers. Even uncached resolutions are still way faster than the round-trip to Google.
Google Public DNS might be worth it if every single one of your DNS queries would result in a cache miss, but otherwise I don't see the performance improvement Google is gunning for.
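To make that concrete, the tradeoff is just an expected value over cache hits and misses. A back-of-the-envelope sketch using the round-trip figures from the parent comment — the hit rates and the 100ms miss penalty are made-up numbers for illustration only:

```python
def expected_lookup_ms(rtt_ms, hit_rate, miss_penalty_ms):
    """Expected cost of one query: the round trip to the resolver, plus
    the extra recursive-resolution time on the fraction it misses."""
    return rtt_ms + (1.0 - hit_rate) * miss_penalty_ms

# ISP resolver ~12ms away, Google ~57ms away (parent comment's figures).
# Assume a 100ms cache-miss penalty and that Google's shared cache
# misses less often -- both assumptions, not measurements.
isp = expected_lookup_ms(12, 0.80, 100)     # -> 32.0 ms
google = expected_lookup_ms(57, 0.95, 100)  # -> 62.0 ms
```

Even granting Google a much better hit rate, the 45ms head start the nearby resolver gets on every single query dominates — which is exactly the "every query would need to be a cache miss" point.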
I always hated the redirection my ISP's DNS and OpenDNS force on me. Every time I make a typo, I have to completely retype the URL because they redirect me...
Now Google can make cash off typo domains by turning them into a Google search and hopefully earning some sponsored-link clicks. Not a bad idea. I wonder if they are going to record requested hostnames so they know which ones have high traffic to spider more often... I can see a lot of reasons why Google would want to offer this.
Eh, I don't know if they'll be able to do that too effectively because of NAT and whatnot, but they can certainly use this information in their PageRank calculation. Knowing what pages people visit is an even bigger vote of confidence for the site than seeing how many pages link to it.
Yes. I meant if you're not behind a NAT they can correlate your IP with your gmail cookie when you do a search and thereby link a record of the sites you are visiting to your profile to serve you more targeted ads. Apologies for the ambiguity.