ISPs Improve Their DNS Hijacking And How To Stop It (hackercodex.com)
68 points by SnowLprd on May 2, 2012 | hide | past | favorite | 50 comments


This is also how OpenDNS makes money. Neustar does the same. And probably others too. They call this "DNS service". Anyone can run a resolver, including your next door neighbor. Unless you live next to a datacenter, your neighbor's "DNS service" will likely be faster than Google's or any commercial vendor's. It's been suggested that the optimum number of users for a decent cache is around 10 [source: IPJ]. Can you trust 10 people not to poison the cache? How many users do you think the "DNS service" providers have? Can you trust each and every one of those users? As for DNSSEC, most people running authoritative nameservers for websites do not support it, let alone most domain name registries.

Interesting to note: no rDNS for either of those IP's.


I believe Comcast stopped this as of their network-wide DNSSEC deployment.

Either way, the article provides a pretty interesting way around it, but I can't expect ISPs hell-bent on false lookup spoofing to sit on their hands for long enough to make this a practical long-term solution.


That's one benefit of DNSSEC. If an ISP adopts it, including NSEC, they can't also do NXDOMAIN spoofing with their DNS servers. Mutually exclusive.

But for ISP's that insist on doing this, there are various workarounds besides the one mentioned in the blog post. It's quite easy. Tunneling inside HTTP is a last resort. At some point it's not worth the trouble for the ISP, e.g., to peek into every packet trying to stop users from getting a proper NXDOMAIN response.
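One simple way to check whether an ISP is doing this: resolve a gibberish name that cannot exist and see if it comes back with an address anyway. A minimal sketch in Python; the resolver function is injectable so the logic can be exercised without touching the network, and the default (`socket.gethostbyname`) is just one reasonable choice:

```python
import random
import socket
import string


def random_nonexistent_name() -> str:
    """Generate a gibberish domain that should yield NXDOMAIN."""
    label = "".join(random.choices(string.ascii_lowercase, k=20))
    return f"{label}.com"


def looks_hijacked(resolve=socket.gethostbyname) -> bool:
    """Return True if a name that should not exist resolves anyway.

    `resolve` should raise an OSError (socket.gaierror) on NXDOMAIN
    and return an IP string otherwise; it is a parameter so the check
    can be tested offline with fake resolvers.
    """
    try:
        resolve(random_nonexistent_name())
    except OSError:
        return False  # proper NXDOMAIN (or other failure): no hijack seen
    return True  # gibberish name "resolved": likely an ad server
```

Note this only catches NXDOMAIN rewriting, not interception that passes nonexistent names through honestly while tampering elsewhere.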


This is one of the many problems that DNSCurve solves, by setting up encrypted and authenticated connections between you and any DNS servers you decide to trust.

http://dnscurve.org/

OpenDNS already supports it:

http://blog.opendns.com/2010/02/23/opendns-dnscurve/


It's great they support it, but how many people have DNSCurve enabled authoritative nameservers? And how many OpenDNS cache users are routing their queries through a DNSCurve forwarder?

It's the same as with DNSSEC. To get the full benefit, every point in the chain needs to be on board, from the source to the sink. If any link in the chain doesn't support it, you're SOL.

If you read Dempsky's announcement carefully, you notice the words "whenever possible". That could be almost never depending on DNSCurve uptake among DNS admins and users.

An easier solution is to just run your own instance of dnscache.

That's what OpenDNS uses.


Meanwhile, OpenDNS hijacked NXDOMAIN results and tried to show me ads the last time I used them. Have they stopped doing that?


Yes, they still hijack NXDOMAIN among other things. The "other things" you can mostly disable with settings, but there's no way to get rid of the NXDOMAIN hijacking, at least not for free; perhaps a paid plan removes it. It annoyed me enough that I switched back to Google DNS, even though I was getting better streaming performance with OpenDNS.


I imagine typo domain suggestion/resolution is one of their key selling points for people who don't know what NXDOMAIN is (i.e. most of their customers), so removing it seems unlikely.


Fair point, but I think it's the browser's job to decide what to do when the user types a domain that doesn't exist. Chrome gives you a search page if you enter a nonexistent domain.


I don't quite understand how the new method of hijacking gets around using 3rd party DNS servers. If I ping nonexistentdomain.tld, doesn't that lookup occur at the 3rd party server? How does my ISP inject its own IP address for that domain if the IP address is coming from (for example) Google? Are they intercepting the entire DNS query?


The claim appears to be that they are intercepting queries to 3rd-party DNS servers, yes.


> The claim appears to be that they are intercepting queries to 3rd-party DNS servers, yes.

[my head explodes] This is just begging to be hacked: bankofamerica can be misspelled a number of ways, and I doubt BoA has covered them all.


Yup.

This is why it's amusing to watch trademark registrants going after domain names based on confusing variations of their registered mark.

Meanwhile their trademarks are being hijacked by DNS services and ISP's, in order to show ads, probably far more often (since these services and ISP's have huge user bases).

And for the trademark registrants, ISP's are much easier to locate and take action against than evasive domain name miscreants.

Pass the popcorn.


If so, they're breaking so much functionality, protocol and ethics in the process. I don't think it's adequate just to bypass it, ISPs need to be confronted when they are sniffing/modifying traffic.


It might not replace all DNS traffic. Maybe it just replaces NXDOMAIN records with its own responses.


Never trust the DNS related claims of someone who uses 'ping' to perform a DNS lookup.


There's a solution to all this, where you will always get the right response, and it even obviates the need for DNSSEC or DNSCurve.

And that is, write your own resolver that only sends nonrecursive queries to authoritative nameservers.

If the DNS admin has configured DNS simply and sensibly, it will only take you 2 queries to get a name resolved. It's very fast.

If they are using Akamai or some other CDN, or they have a love for CNAMES and indirection, it can take many more queries. Sometimes up to 7.


Only 2? Wouldn't you have to hit the root, then the tld server, then the name server for the domain? And if it isn't the root domain (example.net) but a sub-domain (www.example.net), that could potentially return more NS records... and the process would have to happen all over again.

net. => root (return tld)

example.net. => tld (return ns)

www.example.net. => ns (returns ns2)

www.example.net. => ns2


Someone is paying attention. ;)

The tld server ip's are "hardcoded" into the resolver application and revised periodically from the root.zone.gz file, since these servers do not change very often. The application is just a simple lexer that can be easily edited and recompiled. Writing this thing was a learning experience: in the vast majority of cases, DNS lookups follow some very predictable patterns.

So, to answer your question: that first lookup is unnecessary. There's no need to keep hitting the root to get a relatively small number of tld server ip's that rarely change or go inactive. It's easier just to download the root zone regularly to check for changes.

As for subdomains, such as www, that's the CNAME indirection to which I alluded. Every time someone adds indirection, whatever their reasons (e.g. load balancing, CDN, etc.), it slows down the lookup process by necessitating more lookups. It's a small tradeoff that probably few people pay attention to.

From the resolver's perspective, it is more work and it does slow things down compared to the typical 2 query resolution.

Note I still say 2 queries because even with recursive resolvers like the ones we all use, the tld server ip's for the popular tld's are almost always already in the cache. You only need to lookup a single domain.com and the com tld server ip's will be there for all future queries.
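The referral-following loop described above can be sketched with a toy in-memory "network" standing in for real nameservers. All server names and addresses below are made up for illustration; a real resolver would send UDP queries and parse referral (NS) versus answer (A) records out of the responses:

```python
# A toy referral table standing in for real nameservers: each "server"
# either answers with an address (A) or refers the resolver onward (NS).
# All names and addresses here are invented for illustration.
FAKE_SERVERS = {
    "tld-com": {"example.com.": ("NS", "ns1")},        # .com TLD server
    "ns1": {"www.example.com.": ("NS", "ns2")},        # delegates the subdomain
    "ns2": {"www.example.com.": ("A", "192.0.2.10")},  # authoritative answer
}


def resolve(name: str, server: str = "tld-com") -> tuple[str, int]:
    """Follow referrals until an A record turns up.

    Returns (address, query_count). Starting directly at the TLD
    server, with its address "hardcoded" as described above, the
    common case is 2 queries; each extra layer of delegation or
    indirection adds one more.
    """
    queries = 0
    while True:
        queries += 1
        zone = FAKE_SERVERS[server]
        # Longest-suffix match: a TLD server knows example.com.,
        # not www.example.com.
        key = next(k for k in zone if name.endswith(k))
        rtype, rdata = zone[key]
        if rtype == "A":
            return rdata, queries
        server = rdata  # NS referral: ask the next server
```

With the extra www delegation in this toy zone, resolving www.example.com. takes 3 queries, which is exactly the "indirection slows things down" effect being discussed.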


I've thought about writing my own recursive DNS resolver that followed that exact pattern (with caching according to TTL's, of course).

So yes, you may only need to hit the root once to get the tld as it will be cached, but it is still a hit. I am not sure that downloading the root.zone.gz instead is necessarily required; especially with the number of new tld's that they are planning on adding, it would amount to a lot of wasted resources.

Also, for some domains (those in the UK are the ones that popped into my mind), you have sub-domains such as co.uk.

So that is another extra lookup... and depending on whether or not you want to use ANY in the lookup, you find that if you query ns1.nic.uk for co.uk. (A) you get an SOA record back but no NS results, so at that point, instead of just being able to continue, you'd have to retry with co.uk. (ANY). At that point you get back a truncated result and need to retry over TCP...

Now you can continue on to ns1.nic.uk for co.uk. and ask it your question mydomain.co.uk., and so on and so forth.

You've piqued my interest and I am thinking I may start keeping logs from my local recursive DNS resolver and start looking at what the cost is now versus what the cost would be if recursive DNS resolvers would go step by step themselves (keeping in mind TTL's and the like).


What is so insidious about ISPs serving ads on unoccupied domains? I can't see who it hurts, and it seems like a rather clever way to monetize dead space.


If I have a script that is listening for a response to ping or trying to confirm an HTTP request or something, I damn well want to know when the host is unreachable.

I don't want to successfully reach your bullshit ad host. I don't want to get successfully served an ad instead of timing out. I just want to fail.

Any other behavior is wrong.

I assume you ask this honestly.

The problem isn't something like "Oh, hey, well, there's some empty space so let's setup a lemonade stand until someone buys up the property." The address is supposed to be valid, or fail fast. It is of much greater utility to everyone (except the ad farmers) to fail fast.


>It is of much greater utility to everyone (except the ad farmers) to fail fast. //

No it isn't; yours is an extreme minority edge case compared to most internet users'. Users don't want a blank page with just a weird code on it.

They want links to click to get to the page that they meant to type in, or failing that a similar page link supplied by their ISP ...


If users want that, it can and should be solved in the client instead. One of the really nasty things about ISPs doing this is that it actively prevents clients from doing it, since they can no longer easily identify a nonexistent domain.


You might be surprised to learn that the Internet Protocol and the DNS are used for many other things besides serving web pages to browsers.


They want to see a useful message from their browser, like:

This page does not exist. Did you mean '...'?

Which most modern browsers give them provided their ISP does not hijack the NX response to present their ads.


Except that's what the ISP, or DNS provider, is providing - a page indicating the domain isn't there and giving other options.


You're silly, that's silly--stop being silly.

This "average user"? Frankly, we've had the 'net for like fifteen years--now's a good a time for users to learn as any.

Unless, of course, you prefer to spoonfeed the next generation of consumer whores?


We're talking about a barely visible change for most users, in which instead of having to retype a domain or repeat a search when the domain name is a typo (or the domain has closed), a list akin to a SERP is provided.

Users make mistakes, domains are abandoned - for those cases this provides a simple service which reduces user effort.

Do I want to spoonfeed a generation of consumer whores? Would love to. But I suspect they, like errant toddlers, would spit out my message of anti-consumerism, sustainability, social action, fair trade ...


Article author here. These ISPs are violating long-venerated network protocol. As for who it hurts, it's a highly annoying experience for folks who are used to having the browser -- not their ISP -- handle errors for non-existent domains. In addition, there are a number of other ways in which this breaks the Internet; feel free to refer to the bullet points under "Examples of functionality that breaks when an ISP hijacks DNS": http://en.wikipedia.org/wiki/DNS_hijacking


Suppose you want to ssh into a host and you mistype the hostname. What do you want the result to be? Do you want your resolver to return NXDOMAIN immediately so that you can quickly identify your mistake, or do you want it to return a bogus IP address that drops incoming SYN requests to the sshd port, causing you to think the remote host is down?


Interactively viewing a web page is far from the only thing that DNS lookups are for. Breaking the standard (by returning an address when NXDOMAIN is really the correct response) may break other protocols.


Wanting to access a website is not the only reason to resolve a domain.


I'm on Time Warner and just tested this. Could not reproduce.


Could be region-specific. I get the impression that Time Warner Cable's management isn't highly centralized. These tests were performed on Time Warner Cable's SoCal network; I just ran them again with the exact same results, despite the fact that I have my DNS servers set to 8.8.8.8 and 8.8.4.4 (Google's DNS servers).


Time Warner must not have their act together. Just ran a test using their default DNS (not Google), and was able to reproduce. Ran a test on Google DNS, could not reproduce (received an unknown host error). Also in SoCal (TW San Diego).

Not doubting you, just wanted to add another data point.


I want to know whether the false response replacing NXDOMAIN (saying the domain is actually present at your ISP's address) is DNSSEC-signed.

If it isn't, oh well.

If it is, this is an exploit and is big news.


Don't bloggers have to disclose affiliate links now?


No subterfuge intended. Changed the post to make it clearer.


Doesn't this mean ISPs themselves couldn't use their own DNS servers for a reliable DNS service?

Talk about not drinking your own Kool-Aid...


Won't DNSSEC stop this, hopefully?


Run your own nameserver(s)?


Won't help. If the ISP is intercepting all outbound DNS queries, it will also intercept those made by your server.

In case it's not clear, even non-forwarding DNS servers have to make DNS queries to the authoritative servers of each domain.

It may stop invalid results for top level domains (queries to the root servers might not be intercepted), but would likely still serve ads for invalidsubdomain.validdomain.com.

This is one of many reasons why we need broad DNSSEC adoption.


Broad DNSSEC adoption would have to include NSEC, which is needed to authenticate that the resource does indeed not exist. More on NSEC here:

"All records are signed offline. When a nameserver receives a query it looks up the answer plus the signature and returns the two (RRSIG + RRset) to the resolver. The signature is thus not created in real time. How can a secure-aware nameserver then respond to a query for something it does not know (that is, give an NXDOMAIN answer)? The only way to have offline signing and NXDOMAIN answers work together is to somehow sign the data you do not have.

In DNSSEC this is accomplished by the Next SECure (NSEC) record. This NSEC record holds information about the next record; it spans the nonexistence gaps in a zone, so to say."

Source: https://www.cisco.com/web/about/ac123/ac147/archived_issues/...
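The "spanning the nonexistence gaps" idea can be sketched concretely. Assuming a simplified canonical order (a plain string sort rather than real DNSSEC canonical ordering) and made-up zone names, the NSEC record covering a queried name is just the pair of adjacent names whose gap contains it:

```python
import bisect

# A toy signed zone: owner names in (simplified) canonical order.
# Each name's NSEC record points at the next name, wrapping around.
ZONE = ["a.example.", "mail.example.", "www.example."]


def nsec_covering(qname: str) -> tuple[str, str]:
    """Return the (owner, next) NSEC pair whose gap contains qname.

    Handing back this pre-signed pair is how an offline-signed zone
    proves qname does not exist: qname sorts strictly between owner
    and next, so no record for it can be in the zone.
    """
    i = bisect.bisect_right(ZONE, qname)
    owner = ZONE[i - 1] if i > 0 else ZONE[-1]  # wrap below the first name
    nxt = ZONE[i % len(ZONE)]                   # wrap past the last name
    return owner, nxt
```

Note that each answer leaks two real names from the zone, which is the enumeration problem discussed below.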


This is outdated information. The NSEC record did confirm the nonexistence of DNS names. It also disclosed the name of every record in your zone; in other words, it allowed anyone in the world to dump your whole zone.

The namedroppers working group eventually conceded that, no, it wasn't OK to disclose every domain name signed under DNSSEC; that, for instance, virtually every large enterprise in the world had made a practice of setting up dual-facing DNS so that the world only saw a preapproved subset of their names.

And so we got the NSEC3 protocol, which uses a hashing scheme similar to Unix password files. So now, instead of directly dumping domains, attackers get to crack them. Daniel J. Bernstein has a couple presentations about how easy this is.
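The iterated, salted hashing and the dictionary attack against it can be sketched as follows. This is simplified: real NSEC3 (RFC 5155) hashes the name in DNS wire format and encodes the digest in base32, but the iteration structure and the attack are the same in shape:

```python
import hashlib


def nsec3_hash(name: str, salt: bytes, iterations: int) -> bytes:
    """Iterated, salted SHA-1 in the style of NSEC3.

    Simplified: hashes the lowercased textual name rather than the
    DNS wire format that RFC 5155 specifies.
    """
    digest = hashlib.sha1(name.lower().encode() + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest


def crack(target: bytes, wordlist: list[str], salt: bytes, iterations: int):
    """Offline dictionary attack against a hashed owner name.

    The salt and iteration count are published in the zone, so an
    attacker can grind through candidate names at full speed - which
    is why hashing only slows enumeration down rather than stopping it.
    """
    for guess in wordlist:
        if nsec3_hash(guess, salt, iterations) == target:
            return guess
    return None
```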

DNSSEC is a bit of a debacle.


It does indeed seem that way, unfortunately. Is it too optimistic to think a better standard might emerge?


Still wouldn't help, I don't think. Your DNS server is likely just a caching server, which means it forwards requests to another, third party server.


Mine isn't. I would assume that if you wanted to set up a nameserver you'd be able to handle the slight change of adding a root.hints.


You can install PowerDNS recursor[1] or BIND on your own server and use that instead of your ISP's or Google's DNS.

[1]http://doc.powerdns.com/built-in-recursor.html


You can set up your own resolver - I've done this in the past when I've been with an ISP with dreadfully slow nameservers (they weren't doing any interception, just the service was bad).



