A DNS hijacking wave is targeting companies at an almost unprecedented scale (arstechnica.com)
112 points by Elof on Jan 11, 2019 | 101 comments



The "clever trick" seems to be:

previously compromised the login credentials for the administration panel of the target’s DNS provider

or

previously compromised domain registrar or ccTLD

Unless I'm missing something, given either of those things, doesn't take much cleverness...


It's some kind of alarmism over letsencrypt ... "letsencrypt will give tricky attackers a valid certificate for a domain!!!" (if they get control over the domain) (... certs have almost always been granted based on control of the domain, though historically it was mostly via MX records ... so attackers could do pretty much the same thing 15 years ago)


I believe this is why letsencrypt certs are only valid for 3 months. Personally, I'd like it monthly.


I was under the impression that it was more to get admins to automate issuance and auto-renewal, rather than manually issue, forget, and let the cert expire.
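The renewal cadence that automation targets can be sketched concretely. Assuming certbot-style defaults (renew once fewer than 30 days of a 90-day certificate remain; the exact window is my assumption, not something from the thread):

```python
from datetime import datetime, timedelta, timezone

# Renew once the remaining lifetime drops below this window; 30 days is
# the default used by common ACME clients such as certbot (assumed
# typical here; check your client's docs).
RENEWAL_WINDOW = timedelta(days=30)

def should_renew(not_after, now=None):
    """True when the certificate is inside the renewal window."""
    now = now or datetime.now(timezone.utc)
    return not_after - now < RENEWAL_WINDOW

# A 90-day cert issued 70 days ago has 20 days left: renew.
issued = datetime(2019, 1, 1, tzinfo=timezone.utc)
expires = issued + timedelta(days=90)
print(should_renew(expires, now=issued + timedelta(days=70)))  # True
print(should_renew(expires, now=issued + timedelta(days=10)))  # False
```

Run on a schedule (cron or a systemd timer), this makes the short lifetime invisible to the operator, which is the point of the 90-day choice.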

If I have control of a DNS or registrar control panel I can use any of the other free SSL certs out there for an attack.

Comodo will give you a valid cert for 90 days as a trial; others will give you 30 days. AWS Certificate Manager will give you a year (OK, IIRC I can only use those within AWS, but the point still stands; and if I'm being naughty it's not gonna be hard to acquire a few stolen CCs to bill my AWS usage too). All for free. WoSign / StartCom used to issue free 1-year certs (I think WoSign might still, but they are issued by another CA).


This also fixes certificates lasting too long after domains change hands, allows faster deployment of certs with fixes and other new tech. There are many benefits to short expiry.


Yeah but you used to need a credit card. The barrier to entry is lower. HTTPS is a tire fire.


You're completely right, where on Earth would criminals and scammers be able to get a credit card? /s

I am having a hard time understanding how more websites using https could possibly be a worse thing.


Lots of people still have the mindset that http is the default and https signals a high level of trustworthiness.

What we're shifting towards (and maybe already are there) is https is default, and an http-only site is an amateur setup, on the same level as hosting your site via IP (no domain name at all) or with a url like http://myisp.net/~mycompany/default.htm.

Like it or not, browsers are treating sites like this now, and you don't see many "Secured by TrustworthyCo SSL" golden padlock badge images anymore.


It’s even worse now that a lot of online services store your card so that it's easier for a valid user to pay for a service.

If your credentials are compromised, even a paid certificate is easy to get.

At a small company, the payment might be noticed. At big companies the odds are low.


Yeah, yeah.

So many on HN have this mindset. Criminals just whip up credit cards like it's nothing. They don't. It's noisy to use some grandma's credit card to buy a cert for buttsnstuff.ca when she donates to her local church five times a month. Almost all criminals are fucking dumb, or even if they're smart they fuck up before they're good and land themselves in jail. Like at least 98% of them.

HTTPS is a tire fire. Root certs by hostiles. Near-trivial PDAs. Termination at network edge. OS installed certs. Even when it "works" 95% of the packets are on shitty, broken ciphers with no forward secrecy. And that last 5% is built on PKI which we know quantum is breaking pretty fucking soon. Almost nobody rotates access tokens. Almost nobody layers encryption. Almost nobody safeguards certs by locking down permissions. Almost nobody pins them. Almost nobody uses HSTS.

Everything is shabby and shitty and breakable. Let's Encrypt is the wrong solution. It helps irrelevant hobby websites and increases risk for medium sized companies that tried to do the right thing.


Criminals who hijack websites do in fact whip up credit cards "like it's nothing". A huge chunk of abuse attempts on websites that process transactions with credit cards is performed simply to bulk-verify stolen cards. Not even to buy things with the cards; just as a sort of scammer mapreduce to see which of their zillion cards work.

The idea that credit card forms are a form of defense in depth is lunacy.


Interesting. Could the credit card industry take advantage of this by setting up honeypots - sites that look easily exploitable to the average crook, but that would actually provide card issuers with a list of stolen cards?


They would have to change the honeypots often.

Scammers are fully aware that banks will also use the data to find other stolen cards.

You get reports that 100 people had their details breached. You look through their transactions. The one thing they all used was momandpopsidebusiness.com for small transactions.

These were prob test transactions so now you start looking for other customers whose cards fall under the same pattern and let the other banks know of your findings.

So scammers will use many “test sites”, so if one gets found it doesn’t knock out their whole batch.

And word would also quickly spread that cctesterrorscammers.com is a honeypot used by the banks and card processors.

You could look at it like they don’t need the honeypots at all as they just need a few customers to report activity on their accounts and then they can start looking at the data they already have on file.
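The detection heuristic sketched in this subthread (find the merchant common to every breached card's history) is essentially a set intersection. A toy sketch with invented merchant names:

```python
# Toy fraud-analysis sketch: given transaction histories for cards whose
# owners reported fraud, find merchants appearing on every compromised
# card. Merchant names here are made up for illustration.
from functools import reduce

transactions = {
    "card-1": {"groceries.example", "momandpop.example", "gas.example"},
    "card-2": {"momandpop.example", "books.example"},
    "card-3": {"momandpop.example", "coffee.example", "gas.example"},
}

def common_merchants(histories):
    """Merchants present in every compromised card's history:
    candidates for the carders' 'test transaction' site."""
    return reduce(set.intersection, histories.values())

print(common_merchants(transactions))  # {'momandpop.example'}
```

Real systems obviously score overlap statistically rather than requiring an exact intersection, but the shape of the query is the same.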


I had a large response typed up to this, but I'm sick of this. I've worked on too many projects where I saw which countermeasures worked. Involving the financial system brings in their detection methods. It's not perfect, but it keeps out low level grifters and it increases friction.

Most criminals are dumb opportunists.


how does let's encrypt increase the risk for anyone?


Because it makes it a little easier to script forging an HTTPS cert. People here think that is easy, but it isn't. It involves real humans and real bullshit interfaces. If you get nailed using a fake credit card it's easy for any government to figure out that you also pwned a website, but people here think in all-or-nothing terms for almost everything.

I've worked on multiple projects with credit card fraud. I've helped the Canadian government with both cybercrime and machine learning. When I say 98% of criminals are dumb, I really fucking mean it. Not everyone is USG. Most governments are worse-resourced than your run-of-the-mill startup. But people don't want to hear that scriptable HTTPS has downsides, and people in positions that come with social cachet rarely listen. They end up becoming the next generation of people with blinders on. I helped with projects that threw over a dozen people in jail. We got them on two things: IP addresses and financial transactions. Let's Encrypt takes away one possible way we could have gotten them. But people on HN are so deluded about what actual crime looks like.

You know how the vast majority of programmers are these dumb PHP coders that cobble together a Wordpress site? Crime is the same thing, only worse. They have no fucking clue what they're doing. They bruteforce passwords and use exploits that target long-out-of-date vulns.


Something I'm always interested in is how vulnerabilities/attacks are leveraged once a system is compromised.

Obviously if an attacker controls DNS they could do all sorts of bad things: setup a phishing site that looks official, put up a fake press release announcing a merger or sale to manipulate the stock price, hold the domain hostage, etc.

Most of those actions (while bad) would be detected pretty readily. What's clever here is that the attackers _maintain_ a working proxy back to the real system for an extended period of time. Time during which they can then inspect the traffic and peel out sensitive data, presumably to re-use those credentials to break into other systems and escalate further.
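The attack shape described, a hijacked name resolving to attacker infrastructure that transparently relays traffic to the real origin while recording it, can be sketched with toy localhost sockets standing in for victim, proxy, and origin (all payloads invented):

```python
import socket
import threading

captured = []  # bytes the "attacker" proxy has recorded

def origin_server(sock):
    """Stand-in for the real service: answers one request."""
    conn, _ = sock.accept()
    conn.recv(4096)
    conn.sendall(b"HTTP/1.1 200 OK\r\n\r\nwelcome")
    conn.close()

def logging_proxy(listen_sock, origin_addr):
    """Relay one client connection to the origin, recording the request."""
    client, _ = listen_sock.accept()
    request = client.recv(4096)
    captured.append(request)        # attacker keeps a copy...
    upstream = socket.create_connection(origin_addr)
    upstream.sendall(request)       # ...but forwards it unmodified
    client.sendall(upstream.recv(4096))
    upstream.close()
    client.close()

def listener():
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    s.listen(1)
    return s, s.getsockname()

origin_sock, origin_addr = listener()
proxy_sock, proxy_addr = listener()
threading.Thread(target=origin_server, args=(origin_sock,)).start()
threading.Thread(target=logging_proxy, args=(proxy_sock, origin_addr)).start()

# The victim believes proxy_addr is the real service, because DNS said so.
victim = socket.create_connection(proxy_addr)
victim.sendall(b"POST /login HTTP/1.1\r\n\r\nuser=alice&pass=hunter2")
reply = victim.recv(4096)
victim.close()

print(reply)                        # the genuine origin response
print(b"hunter2" in captured[0])    # True: credentials were siphoned
```

Because the victim gets real responses back, the site keeps working and nothing looks wrong, which is exactly why this variant stays undetected longer than defacement or phishing.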


This is not a clever trick. This is just an account compromise. Don’t use a shitty domain registrar or a shitty DNS hosting service, and for God's sake use multi-factor authentication on what is the most crucial element of your online presence.


That's why I placed this Ask HN a while ago:

https://news.ycombinator.com/item?id=17704828

I pitched that idea at startup school, and got accepted. However, after a while I pivoted to something else, as I decided that the name registry and DNS market is just too crowded. I was afraid that we'd be spending 99% of the time and budget on convincing people they need secure domain/DNS management, instead of building the technology.


Security could be an advantage for a variety of online services. When selecting an online service, I'm starting to use U2F 2FA support as a way to narrow the list of services to consider, so making a decision is easier. Few domain name registrars offer U2F from what I can tell.


Note that these attacks involve compromised accounts with authority servers, so despite being the most visible and impactful DNS attacks of the last few years, DNSSEC would have done little to defend against them; in fact, even in the DNSSEC fantasy-world where DANE replaces X.509 CAs, these attackers would still have accomplished their goals.


After reading the headline I immediately thought of "14 DNS Nerds Don't Control the Internet" [0].

[0] https://sockpuppet.org/blog/2016/10/27/14-dns-nerds-dont-con...


That article is peddling bullshit.

Yes, DNSSEC is not adopted. But the intention with it is to stop people hijacking DNS requests (re-routing them to rogue servers, for instance) and returning spurious answers.

That’s a relatively simple attack, and it can have fairly serious repercussions. Just return an A record for the domain and host straight HTTP, for example. Or re-divert emails with MX records. Publish fake CAA records to bypass that safety lock if you want to supply a cert obtained elsewhere.

The stuff about the US Govt controlling sites is the most specious of all. As the original (non) story above shows, controlling the DNS is all you need to control a site in the X.509 world. Extended validation is a joke; controlling the domain is the only barrier to getting TLS certs issued for any domain.

We implicitly need to trust the root DNS. That’s a given. So why couldn’t it be the root of trust for secure browsing? Browsers trust something like 1500 CAs out of the box these days; is it really better to create a system where that many orgs need to be honest, and not get hacked, for it to be effective?

To claim that the current system, with no way for people to know the DNS answers they receive are valid, poses no security risk, is extremely foolish.


Here's a story about a DNS hijacking attack unprecedented in scale for which DNSSEC is powerless, and your conclusion is that DNSSEC is an important priority.

If you believe control of the DNS is straightforward without DNSSEC, and that control of the DNS is all you need to get an X.509 certificate issued, go get a GOOGLE.COM certificate misissued. Or FACEBOOK.COM. If you actually manage to do it (you won't), turn the timer on your iPhone on so we can measure how long it takes for Google to kill the CA you got it from, with no notification or further intervention from you.
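The automatic detection being referenced here is Certificate Transparency: CAs log issuances to public append-only logs, and monitors watch them for high-value names. The logs' append-only guarantee rests on a Merkle tree; a minimal sketch of RFC 6962-style hashing with an inclusion-proof check (the "certificates" are placeholder strings):

```python
import hashlib

def sha(data):
    return hashlib.sha256(data).digest()

def leaf_hash(entry):
    # RFC 6962: leaf hash = SHA-256(0x00 || entry)
    return sha(b"\x00" + entry)

def node_hash(left, right):
    # RFC 6962: interior node = SHA-256(0x01 || left || right)
    return sha(b"\x01" + left + right)

def split_point(n):
    # Largest power of two strictly less than n.
    k = 1
    while k * 2 < n:
        k *= 2
    return k

def tree_hash(entries):
    if len(entries) == 1:
        return leaf_hash(entries[0])
    k = split_point(len(entries))
    return node_hash(tree_hash(entries[:k]), tree_hash(entries[k:]))

def inclusion_proof(entries, index):
    # Audit path: sibling subtree hashes from leaf to root.
    if len(entries) == 1:
        return []
    k = split_point(len(entries))
    if index < k:
        return inclusion_proof(entries[:k], index) + [("R", tree_hash(entries[k:]))]
    return inclusion_proof(entries[k:], index - k) + [("L", tree_hash(entries[:k]))]

def verify(entry, proof, root):
    h = leaf_hash(entry)
    for side, sibling in proof:
        h = node_hash(h, sibling) if side == "R" else node_hash(sibling, h)
    return h == root

# Placeholder "log entries"; real logs store encoded certificates.
certs = [b"cert-for-example.com", b"cert-for-google.com", b"cert-for-hn.com"]
root = tree_hash(certs)
proof = inclusion_proof(certs, 1)
print(verify(certs[1], proof, root))        # True
print(verify(b"forged-cert", proof, root))  # False
```

A misissued GOOGLE.COM cert either appears in a log (and a monitor sees it) or lacks a valid proof (and Chrome won't accept it); that is the mechanism behind "detected with no further intervention".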

We do not implicitly trust the DNS roots. In fact, it's a core feature of modern Internet security (modern since the late 1990s) that we do not trust DNS at all. It is a small faction of standards zealots, whose pet standard failed for almost 30 years to either gel or get traction in the market, who have decided that their spurned work turns out to be critical to all Internet security, and they're the ones revisiting that long-decided question.

You made this argument in, I think, 3 other places in this thread, and I'd just like to say that I put some effort into making sure my rebuttals relied on different arguments each time. Collect them all! I wrote them I think a little snarkily, but I tried to exceed the bar you set by claiming I'm "peddling bullshit".


Sure for google.com it’ll fail. But you could do it for many, many others. The reality is that control of a zone is all it really takes for someone to get a cert issued for it. In that context you are most certainly dependent on the accuracy of the DNS.

I didn’t for one minute suggest DNSSEC would help in relation to the attack detailed in the article.

I am just saying that to claim securing the DNS is pointless is, in my opinion, a fallacy.


[flagged]


This response breaks down as follows:

1. An incivility directed at me.

2. Another incivility directed at the people whose sites were hijacked.

3. The concession that a misissuance of GOOGLE.COM or FACEBOOK.COM would be detected and unlikely to be successful.

4. The claim that that's only true for sites like GOOGLE.COM and FACEBOOK.COM without further refinement or evidence.

5. Five paragraphs of irrelevant detail about the mechanics of Google's response to a misissuance that have nothing to do with his or my argument.

6. A repeat of the concession from (3).

7. A final claim that a CA getting killed, as Google recently did to the largest, best-known CA in the market, is a "Hollywood Action Thriller style sequence of events", to which I will only respond, check out "First Man", it's great, and a much more interesting show than watching Google respond to misissuance.


> misissuance of GOOGLE.COM or FACEBOOK.COM would be detected and unlikely to be successful

Eventually detection is almost certain, but whether it's "successful" would depend very much on what somebody was doing with it and why.

We have some examples to work with in analysing this, where certificates for Facebook or Google names were issued at various times without Facebook or Google knowing about it - and maybe I'll do that analysis later, but for now I want to focus on your Hollywood Action Thriller scenario.

Google did not "kill" the "largest best-known CA in the market".

Back in January 2017 Andrew Ayer wrote to m.d.s.policy about some certificates Symantec had issued for names like example.com (sic) which Andrew had verified were not asked for by example.com's legitimate owners. This gradually spiralled, with Mozilla producing a fairly substantial document listing well over a dozen distinct problems, both newly discovered and dating back a little way, with Symantec. Overall the impression we got was that Symantec management were not delivering the oversight role needed to ensure their CA achieved what a relying party should expect.

Symantec management didn't like where this was going and tried to "go over our heads". I have no idea whether this worked for Microsoft and Apple, and for me there isn't anyone "over my head", but at Google it appears to have made things worse.

In summer 2017 Google's plan asked Symantec to replace their infrastructure and institute bottom-up change to their organisation in order to restore our confidence in the CA. For practical reasons (it's hard to stall your customers for perhaps 1-2 years while you fix things) Symantec would have needed to continue selling certificates during the period when we did not trust their management to operate a CA, and so they'd need to find another large CA to provide us with the assurances we need while retaining Symantec (or Thawte, Verisign, etcetera, all brands of Symantec) branding.

Symantec negotiated with DigiCert to provide this capability over summer 2017 (very small Certificate Authorities would not have been able to practically do what was needed) but at some point during that negotiation they pivoted to instead selling the business to DigiCert.

Once the initial agreement existed in October 2017, DigiCert and Symantec sought permission to go ahead, and received it on some simple conditions (Mozilla's concern was that this might be something akin to a "reverse take over" in which Symantec would dodge the intended management changes and instead seize a new brand, key people at DigiCert were able to assure us that this was not going to happen), then all the usual business stuff happened, and in parallel DigiCert began building a new issuance infrastructure for the ex-Symantec brands, more or less as they would have under the original concept but with them keeping the profits.

In practical terms Symantec chose to exit the CA business a bit less than a year after Andrew's original post to m.d.s.policy, after many months of discussion across about all the issues raised.

Now, if you want you can speculate about how _hard_ it is for incompetent and untrustworthy people to become competent and trustworthy, but Symantec decided they weren't interested in that path so we'll never know. Nobody killed them, they decided they weren't interested in reform.


This is just more irrelevant detail. Your essential rhetorical strategy here is to concede the argument I've made, but pretend otherwise by marshaling hundreds of words of details that don't address the point you're claiming to rebut.

Nobody cares who wrote to m.d.s.policy about the misissuance or the precise dynamics of Symantec getting out of the CA business --- though surely you'll want to claim otherwise to preserve the notion that you've rebutted me.

The simple facts:

* Symantec issued a full thirty percent of all TLS certificates in 2015.

* Google was made aware (through multiple channels) of misissuance.

* Google arranged with Mozilla to distrust Symantec.

* Symantec is now out of the CA business.

If you're trying to claim that Symantec is out of the CA business because it simply wanted to be, and so somehow gracefully exited by selling to Digicert, no, that is not what happened.

Otherwise, none of the detail you're offering has anything to do with this thread.


Your claim was that Google would "kill the CA you got it from" if somebody obtains a certificate for the name GOOGLE.COM and that they'd need to "turn the timer on your iPhone on so we can measure how long it takes" with "no notification".

I've explained this is ludicrously far from reality, spelling everything out so that people can see this imaginary lightning fast reaction doesn't exist. Would the GOOGLE.COM certificate itself get revoked? Yeah, probably. Might even happen the same day if you're lucky.

Would anything at all happen to the CA, ever? Probably not, though it would depend on what exactly the sequence of events was. If it did, as we saw with Symantec it would take months to decide what that should be, and it's very unlikely to be a complete distrust.

Your scenario is something that belongs in a thriller, I gave a nice example where a Vernor Vinge novel does almost exactly this, in a fictional future California, and I explained that er, no, that's not how it works. You are welcome to keep living in a dream world, but if you're going to threaten people with imaginary consequences for doing things you don't like, maybe say you'll launch a fireball at them with your mind or something so nobody thinks you're talking about the real world.


I'm pretty comfortable at this point with what this thread says about my argument and your rebuttal and am happy to leave it here.


What a strange article. I thought it was leading up to saying that control of DNSSEC is decentralized, or has a transparency process, or something.

But instead of 14 nerds, it's the US government (for .com).

I need to read up on DNSSEC.


>But instead of 14 nerds, it's the US government (for .com). //

So it's much worse than it being 14 random nerds then!

/not-sure-if-joking


There's a more technical takedown of DNSSEC linked at the bottom of that post.


Much better than (CA0 || CA1 || ...). All it takes is one CA out of tens of independent CAs misbehaving to make the whole of TLS insecure.

In DNSSEC/DANE, the world only has to watch one entity rather than tens of entities.


No. You have to trust all the CAs, and the governments that control the DNS.

https://www.imperialviolet.org/2015/01/17/notdane.html


> No. You have to trust all the CAs, and the governments that control the DNS.

Not in DNSSEC. .xxx need only trust the dnsroot. yyy.xxx need only trust .xxx and the dnsroot.

firefox/chrome/etc, with support from important orgs with high-value names (google.com/bankofamerica.com/etc), would then make sure that dnsroot/.com/etc do not abuse the trust. They have incentive and methods of punishment. There is no legal requirement that clients map the DNS root to the existing root keys. A client can map a.b.c to any key it wants.

The risk of gov overreach is the same for both TLS and DNSSEC. DNSSEC just trusts fewer entities. The only people who benefit from the current system are CAs, who are getting $$$ for nothing.

> https://www.imperialviolet.org/2015/01/17/notdane.html

This is orthogonal. Weak keys are not a required or implied characteristic of DNSSEC.
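The structural claim being argued, that any one of all trusted CAs can issue for a name versus only the delegation chain in DANE, can be put into a toy model (the ~1500 CA figure appears elsewhere in this thread; everything else here is illustrative):

```python
# Toy model of the trust-surface claim. In the WebPKI, any trusted CA can
# issue for any name; under DNSSEC/DANE only the delegation chain for the
# name signs for it. The 1500 count is the thread's figure, not measured.

TRUSTED_CAS = {f"ca-{i}" for i in range(1500)}

def webpki_attack_surface(name):
    # Any trusted CA may issue for any name: the surface is all of them.
    return set(TRUSTED_CAS)

def dane_attack_surface(name):
    # Only the signers on the delegation chain: root, TLD, ..., the zone.
    labels = name.split(".")
    chain = {"."}
    for i in range(len(labels)):
        chain.add(".".join(labels[i:]))
    return chain

print(len(webpki_attack_surface("example.com")))   # 1500
print(sorted(dane_attack_surface("example.com")))  # ['.', 'com', 'example.com']
```

The counter-argument made upthread is that raw counts aren't the whole story, since misissuance detection and CA distrust change the effective risk on the WebPKI side.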


If browsers start mapping cert trust to something besides the DNS roots... it’s not DNSSEC, it’s something else entirely, it’s “our current system, maybe with some slight tweaks”


I am not suggesting every client does its own mapping; that is not a naming system at all. There has to be very large consensus for a naming system to be effective. I just pointed that out to show that DNS is not under any government's control. It's under the control of an entity that can be punished.

However, who gets to be the dnsroot is just a config value in DNSSEC. The value itself should not be used to criticize DNSSEC because it's changeable.


How do you punish .com if they misbehave? Move every site off .com?


No. You just map .com to another key, with an agreement that the new .com owner pre-signs and maps existing .com subdomains the right way. An unaware xxx.com does not need to do anything. As long as it's done publicly, with a bang and enough consensus, disruption should be minimal.

Again, this is unavoidable in any system that needs trust. That's why I like PoW DNS.


Who is "you"? The people we're afraid of manipulating .COM control the DNS. Google can't "map .com to another key". Their option would be to leave .COM; that is the gun DNSSEC would give to the USG to hold against Google's head.


"You" is firefox/chrome/etc. Yes, they can. The ownership of .com is not as exclusive/protected as .xxx or xxx.com. Thus firefox/chrome/etc can map it to anyone they like. Considering the many high-value .com subnames, .com could be transferred to a neutral party or even the dnsroot. The USG does not own the ".com" string. No one does. Just like ".".


Your claim here is that a browser vendor could somehow fork the DNS and use its own .COM? Explain how that could possibly work.


Anyone can fork DNS. It's just a (name, key) map. As long as it's done with enough consensus, it can be done. Mismanagement of .com is serious enough to demand that kind of change.

Let's say .com gets mismanaged. The community is furious. firefox/chrome/etc demand that . remap .com to a new, more trustable entity. If . does not, firefox/chrome/etc then remap . itself to a new, more trustable entity, because .com must be as trustable as ., because .com is that important. The new . gives back ownership of all TLDs to their previous owners, except for .com; .com goes to the more trustable entity as intended. The new .com then does a similar import of all good xxx.com.

In this whole incident, no one loses the ownership of their names except for .com and possibly . .

Now no gov can touch *.com. Though it's different for ccTLDs; those are owned by their respective governments. Same goes for gTLDs. But no one gets to mess with . .com .org .net.


It sounds like what you’re proposing is for browser vendors to, in unison, overthrow IANA and the related organizations and stage a coup where they start running their own DNS root authority. And then claiming that this would happen without impact to end users / owners-of-individual-domains.


Browser vendors (really, all DNS users) have the option. They can do it, if IANA fails at the job of being the dnsroot. Disruption is inversely proportional to consensus. If everyone does it, there is no disruption. Some disruption is unavoidable. It's a fair price to pay for a stable and solid global naming system.

Ultimately it's about deciding who gets to own the "x.y.z" string brand globally/contextlessly. The world obviously needs a single naming system. Either that, or expect multiple owners of "google.com".

My suggestions are required; otherwise, why would someone build a global brand if ownership is not safe or guaranteed enough? The future is way more chaotic. Without crypto, a global naming system is not going to survive.


OK. I don't think we have to debate this any more. We can just leave it here: you think DNSSEC is a workable solution to our problems as long as the browsers can, if they ever need to, create their own alternate DNS for the web.


The linked article is about why you can’t simply trust the DNS roots, even if you were naive enough to want to.


If you can’t trust them then the whole thing crumbles anyway.

All you need to obtain valid TLS certs for any domain is to make a CA think you control the domain. So the CA’s trust in the DNS root is already functioning as the basis of X.509.


You manifestly can't trust them today, couldn't yesterday, or for the last 30 years, despite the rise of e-commerce and the gradual shift of all applications to the web with its domain-validated WebPKI. Google doesn't DNSSEC sign. Facebook doesn't DNSSEC sign. No major bank I've found DNSSEC signs. AT&T doesn't DNSSEC sign, nor does Verizon. Some part of Comcast does, or did, and the net effect was that DNSSEC errors broke HBO NOW on launch day for Comcast users (and only Comcast users).

Tell me more about how the whole thing crumbles away? Because I'm pretty sure I'm typing into a TEXTAREA on the real HN, and not some facsimile a DNS hacker created to fool me. The Internet seems to be working fine without the government-run PKI you're saying we have to have.


That’s not pure DANE being discussed, but a hybrid in which CAs are still playing a role.

In pure DANE you need only trust the DNS root.


The whole premise of AGL's article is the fact that you can't have pure DANE. Literally, "a hybrid of DANE and CAs" is just a restatement of the sentence "you have to trust all the CAs and the DNS". You haven't said anything in this comment.


Except if that one entity misbehaves, even if you catch them, you can't do anything about it, because they own the TLD.


Yeah but because they own the TLD they can get X.509 certs issued for any domain under it, because controlling the domain is the only check CAs really perform before issuing a cert for a domain.

The DNS is already acting as the root of trust for X.509. X.509 does not make the scenario of a rogue TLD operator any different.


You have to trust someone under DNS. The only trustless naming system I can think of is over a PoWChain (eg example.btc).

Still you have 3 choices in DNSSEC/DANE,

  - get a .xxx, trust the dnsroot.
  - get a .xxx (when .xxx is as easy to register as xxx.com), trust the dnsroot.
  - pick one TLD out of 1000s and get xxx.ttt, and trust .ttt and the dnsroot.


I’ve got those choices if I use DNSSEC for my trust, correct. Or I use the existing system, where if a CA misbehaves, we boot them out of the browser trust stores and site operators don’t have to change anything.


The CAs, for the most part, only require you prove you control a domain to issue a cert for it.

So you’re already trusting the DNS, whether protected with DNSSEC or not, in the existing system.


And yet when attackers want to misissue certs for small sites (for big sites, misissuance is detected automatically and gets CAs killed), they don't exploit vulnerabilities that DNSSEC defends against. Why is that? And given that's the case, why pursue DNSSEC?

And how is any of this, any of it at all, relevant in a world where registrars can simply speak RDAP to CAs? If you believe the problem is that the Internet will (to use your turn of phrase upthread) crumble away unless we secure the DNS for domain validation, why should we forklift out the entire DNS to do so, when we can just get a small group of organizations to deploy RDAP, something they're planning on deploying anyways, and then add that to the 10 Blessed Methods?

No part of DNSSEC makes any sense.


Because the DNS as it is allows for the potential to do something similar (by getting a CA to accept a fraudulent DNS response, leading it to issue a cert) without someone otherwise seizing control of a domain.

It makes no sense not to try to secure the DNS.


Securing the DNS (a) doesn't fix the underlying problem for TLS (as you can see by the last 2 waves of CA-misissuance takeover attacks, neither of which relied on wire-level DNS hijacking) and (b) adds nothing to any secure protocol, which already has to do end-to-end verification today. Despite that, DNSSEC is already the most expensive proposal we have on the table today, requiring every major site and every major piece of software to upgrade or reconfigure.

Deploying RDAP and adding it to the CA/B Forum Blessed Methods gives CAs themselves an end-to-end ability to validate domains, decisively solving the DV problem, and doesn't require any of that expense.

Explain to me again why we should choose the former over the latter?


Except there has to be a crypto proof of why Google owns google.com and not me. That means we need to secure DNS. Then why do we need CAs at all? What's the point?


A group of certifying signers that aren’t directly controlled by the United States Government is the obvious reason.


Current: Google needs to watch all CAs.

DNSSEC: Google needs to watch .com and the dnsroot.

Which one is better?

----

(I am ratelimited so posting here rather than reply to the child post by tptacek https://news.ycombinator.com/item?id=18889809)

Of course they can. There is literally no legal or other difference between Verisign and .com. Chrome can do whatever it wants, because it's Google's browser, not .com's.

In case .xxx becomes dishonest, you can just move to your own gTLD or a more trustable TLD. In the current system, there is no concept of ditching a CA. If a CA decides to mismap a name and you are too small, you are fucked.

> it’s actually 1, or 1 AND 2

No, you can have DNSSEC without CAs. I have explained that already, without changing much of TLS. Basically, example.com's DNSSEC key becomes the CA for example.com. example.com then creates a TLS cert in the usual way. No pain.


“You can just move to your own <other TLD>” isn’t even remotely plausible. Any site with worthwhile traffic isn’t going to just forklift to a new TLD and convince all their users to switch over. Imagine if .com was considered untrustworthy and suddenly every user in the US had to use google.othertld, facebook.othertld, etc.


Yeah but if .com is untrustworthy then the game is up.

The operator of .com can use their control over it to get a valid TLS cert issued by any number of CAs.

So the situation is no different currently, trust in the DNS is essential.


Again if that's true then the game is up, because the USG obviously controls .COM; they theatrically demonstrate that every time they take down a piracy site. But, spoiler! The game turns out not to be up.


The former, for several reasons, among them the fact that those actually aren’t the options (it’s actually 1, or 1 AND 2), and the fact that Google can’t end .com the way they did Verisign.

But feel free to ask the relevant team at Google, who will give you the same answer.


There are about 1500 entities in the X.509 game, not 10s.


DNSSEC has the unique advantage of permitting offline signing. If you go this route, even somebody controlling your authoritative servers wouldn't be able to modify your records.


It doesn't matter if you use offline signing for your zone if someone owns up the account you log into to control your domain with your registrar, or owns up the registrar. So no, even with offline signing, DNSSEC did nothing here.

But it's worth keeping in mind that most organizations can't use offline signing, because the duct-tape-and-baling-wire solutions DNSSEC applies to people dumping zones with NSEC records all require online signers.


Depending on the registrar, updating glue records can be a separate process that requires additional authentication. Not long ago my registrar required me to contact them directly to update glue records.

Offline signing is a very useful feature precisely because it makes it easier to differentiate security domains. For example, I could use offline signing for foo.com (along with a registrar lock) but delegate the subdomain dyn.foo.com to a separate SOA that uses real-time signing (or none at all) for use by internal services.

The problem with the modern web PKI is that, as a practical matter, everybody is forced to put their private keys not only online, but unprotected (because HSM and PKCS#11 support isn't that great, yet).[1] Key rotation and certificate expiration don't really solve the problem; in fact, rotation exacerbates the problem by 1) forcing you to keep the CA keys online, and 2) incentivizing increasingly loose authorization policies.

Offline signing makes it easier to manage risk in a more robust manner. It's a tool, not a panacea; a tool conspicuously missing from TLS infrastructure. Some newer projects like Wireguard have effectively turned asymmetric key authentication systems into something that walks and quacks exactly like shared passwords. They do it because key management is a hard problem. But I'm not ready to throw in the towel, and the option (both officially and as a practical choice) of offline key signing in DNSSEC is under appreciated. From a security perspective, allowing people to enumerate my subdomains is a small price to pay for permitting me to keep my private keys offline. I don't expect everybody to make that calculation, but it bothers me that people fail to see the value at all.

[1] People faithfully recite the mantra "encrypt at rest" as if that means something. Data at rest is useless. If your data is worth anything then you're going to actually be, you know, using it, and if it's not protected in use then it's all just security theater. This is most clear with the private keys (e.g. stored "encrypted at rest" in KMS) used by cloud services for acquiring access tokens. It's 2019 and industry is still basically using shared passwords--tons of them, a complex web of passwords dutifully pushed around the network by layers of complex software. As if any of it matters to someone who has figured out how to penetrate your network; as if 5 minute or even 5 second password rotation matters to the guy who already figured out how to automate penetration onto your systems.


With the root account on every registrar I have access to, I can trivially defeat DNSSEC for my zone. Tell me which registrar you're talking about where you believe their security controls would make DNSSEC resilient.

(You addressed the first half of my comment and not the second).


Obviously the signing of example.com by .com must itself be secure. Otherwise no cryptographic delegation is secure, including TLS signing.

> where DANE replaces X.509 CAs

Much easier migration, actually. Just patch Firefox/etc. to accept example.com's DNSSEC key as a root CA. Then example.com can create its own TLS cert. A very simple and minor patch to the TLS codebase.


"almost unprecedented" implies that there was an earlier wave of DNS hijacking that was even larger (thus setting some kind of precedent). Was there such a precedent? And if so, I didn't see mention of it in the article.



> The National Cybersecurity and Communications Integration Center issued a statement [1] that encouraged administrators to read the FireEye report. [2]

[1] https://www.us-cert.gov/ncas/current-activity/2019/01/10/DNS...

[2] https://www.fireeye.com/blog/threat-research/2019/01/global-...


Yes. The source reads much better.


Via the article: "(FireEye) advised administrators to take a variety of measures, including:

* ensure they’re using multifactor authentication to protect the domain’s administration panel

* check that their A and NS records are valid

* search transparency logs for unauthorized TLS certificates covering their domains and

* conduct internal investigations to assess if networks have been compromised"
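The transparency-log check is easy to script. A minimal sketch, assuming crt.sh's public JSON endpoint (`https://crt.sh/?q=<domain>&output=json`) and the field names it typically returns (e.g. `issuer_name`, `name_value`); the sample data and the trusted-issuer list below are made up for illustration:

```python
import json
import urllib.request

def fetch_ct_entries(domain):
    """Query crt.sh's JSON endpoint for certs covering *.domain."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def unexpected_issuers(entries, trusted_issuers):
    """Return entries whose issuer isn't one your org actually uses."""
    return [e for e in entries
            if not any(t in e.get("issuer_name", "") for t in trusted_issuers)]

# Offline example shaped like crt.sh output (no network needed):
sample = [
    {"issuer_name": "C=US, O=Let's Encrypt, CN=R3", "name_value": "www.example.com"},
    {"issuer_name": "C=US, O=SomeOtherCA", "name_value": "mail.example.com"},
]
suspicious = unexpected_issuers(sample, trusted_issuers=["Let's Encrypt"])
print([e["name_value"] for e in suspicious])  # → ['mail.example.com']
```

A cert from a CA your org never uses is exactly the signal this attack leaves behind in the CT logs.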


"One DNS hijacking technique involves changing what’s known as the DNS A record."

Could be any record, depends on the intentions of the hijacker. Typically we see web traffic being hijacked to another ipv4 host which indeed, is an A record. Another abuse option could be to alter SPF/DKIM to do a more sophisticated phishing campaign.
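A cheap partial mitigation is to periodically compare what resolvers actually return for your names against a known-good list. A minimal sketch; the expected address set is hypothetical, and this only catches hijacks visible from your vantage point:

```python
import socket

# Hypothetical known-good A records for your zone
EXPECTED = {
    "www.example.com": {"93.184.216.34"},
}

def current_a_records(host):
    """Resolve host to its current set of IPv4 addresses."""
    return {info[4][0] for info in socket.getaddrinfo(host, None, socket.AF_INET)}

def diff_records(observed, expected):
    """Return addresses that appeared but aren't on the known-good list."""
    return observed - expected

# Pure example, no network needed:
observed = {"93.184.216.34", "203.0.113.66"}
print(diff_records(observed, EXPECTED["www.example.com"]))  # → {'203.0.113.66'}
```

The same comparison works for NS, MX, or TXT records if you resolve those instead.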


And because there are so many SPF records that include other SPF records, which include still others... you can be pretty undetectable.


What ever happened to HPKP? It seems like that would somewhat mitigate these attacks since they rely on using their control over the domain to get a new DV cert. A pinned certificate would at least protect those who have accessed the sites before.


HPKP could have potentiated these attacks by allowing an attacker to pin their certificate while they were in control of the domain, making it difficult for the domain owner to ever fully regain control.



If you're confused as to why, this article was illuminating: https://scotthelme.co.uk/using-security-features-to-do-bad-t...


HPKP was underdesigned; it was a protocol evolution of something Google was already doing semi-manually. There was a competing initiative inside Google --- certificate transparency --- and that won out.

There's validity in the approach and I hope it comes back sometime, maybe with additional mechanism around managing pins.


Is HPKP Report Only also being deprecated?


It's being widely phased out for being too risky and error-prone. Attacks like this get a lot of media attention, but are pretty rare overall. It's much more common for someone to mishandle certificates or servers somehow and end up locking all of your visitors out of your own site; or you never bother with pinning, but the attacker who takes over your domain does, once again effectively hijacking your domain name indefinitely.

I never set it up for any of my sites for the same reason, just too many ways for it to go wrong.


If an attacker can spoof your DNS records (or if they can simply man in the middle the connection between your server and the internet), then they can generate valid Let’s Encrypt certificates.

If I had the time or inclination, I’d write a transparent https gateway that used let’s encrypt to man-in-the-middle http and https connections to servers behind it.

You could imagine deploying something like that on the edge of AWS for mass surveillance purposes, or maybe a misguided white-hat could use it to “secure” http-only services (it’s an improvement in a defeatist sort of way...)


For the MTM scenario, how would you convince letsencrypt’s CA to issue you a cert for any domain? Don’t you need to complete the challenge in order for the CA to issue you a cert?


You’d have to somehow redirect / spoof DNS responses to Let’s Encrypt to fool them and make it look like you passed the challenge.

Not trivial, but far from impossible for as long as the world maintains that securing the DNS is pointless.


This is the problem with quickie SSL cert issuance from "Let's Encrypt". If it took 10 days of consistent domain resolution to get an SSL cert, this wouldn't be happening.


The attack is based on compromising control of the victim’s domain.

What’s to stop someone in control of a domain putting records up for 10 days? It’d still happen, just be a delay between compromising the domain and getting the cert is all.


Presumably the owner of the domain would notice if their site was diverted for ten days.


This is clever in that they evaded detection. But I've seen and been the victim of much more sophisticated DNS hijacks that didn't require any credentials.


Could this be a big setback for "Let's Encrypt" since it uses DNS resolution for its own authentication instead of being a second factor?


All domain-validated certificates use factors you can control if you control the domain, whether email or web or DNS. This has nothing to do with Let's Encrypt.


Yeah, but it is a problem with domain-validated certificates in general that kinda defeats the purpose of SSL.

It seems most of the time that a web site is "hacked" (defaced) somebody changed the DNS instead of attacking the actual web server.

SSL signing can potentially be a second line of defense, but only if having control of the DNS (thus web and email) is insufficient to get a cert.


What are the alternatives?

About 20 years ago, I remember having to go through tons of hoops to get a certificate. Faxing corporate docs and other bureaucracy. That can all be forged.


- TXT record: pointless if your DNS is hacked

- CNAME record: pointless if your DNS is hacked

- Put a file or add a meta-tag to HTML at a specific path: pointless if your DNS is hacked, they can just add/change A/AAAA record and host their own webserver

- Email to webmaster@.. etc: pointless if your DNS is hacked, add MX record


> That can all be forged.

And this is where much of the world is heavily behind, say, Estonia or Latvia (or other countries) that provide cryptographically signed documents tied to people; you practically can't forge those documents.


So, the validation done by all publicly trusted Certificate Authorities has to require one of the "Ten Blessed Methods". That's what they're called, though today there are exactly nine of them.

3.2.2.4.1 and 3.2.2.4.5 are now obsolete and no longer used, method 3.2.2.4.11 is basically what happened before the "Ten Blessed Methods" and so now irrelevant.

Methods 3.2.2.4.2 through 3.2.2.4.4 are about contacting somebody based on details from WHOIS by various methods, like sending them a Fax, or giving them a phone call. This is even more laughably insecure than your average DNS setup.

Method 3.2.2.4.6 is the way most people get their first Let's Encrypt cert, and is also a popular option for lots of other bulk CAs, it's about making a change to your web site that the CA can confirm. Obviously they need DNS to reach the site so that's affected.

3.2.2.4.7 puts the change into DNS directly. A better option for Let's Encrypt in most cases, and the only one of these methods that's cryptographically secure end to end (if you deploy DNSSEC).

3.2.2.4.8 turns things upside down and validates based on you having previously proved you control an IP address, then they do a DNS lookup to find that the DNS name you're asking for has an A or AAAA record with that address. This might, maybe, be a good way to get the cert for 1.1.1.1 or things like that, but I will not be astonished if this goes away.

3.2.2.4.9 instead of changing a web page you put a dummy certificate up, created by the CA (not a real cert, it's just to prove you can change the certificate).

3.2.2.4.10 is how the tls-sni-01 (now abandoned) and tls-alpn-01 (new hotness) features in Let's Encrypt work. You do TLS setup, but then you (ab)use that to prove your identity instead of actually delivering a good cert, since if you already had a good cert you wouldn't be trying to get one.

3.2.2.4.12 says basically if you're the DNS registrar AND a Certificate Authority then you can issue everybody who has names under your domain with certificates, since you know who they are.

As you can see, most of these methods depend on DNS, a few don't but are relying on something that makes DNS look like Fort Knox. 3.2.2.4.12 only sidesteps this by making your DNS registrar also your CA, so if they broke into your DNS registrar account they would still get a cert.
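For the curious, method 3.2.2.4.7 as implemented by ACME's dns-01 challenge boils down to publishing a digest of the challenge token plus your account-key thumbprint in a TXT record at `_acme-challenge.<domain>` (RFC 8555 §8.4). A sketch of computing that value; the token and thumbprint below are made-up placeholders, not real ACME values:

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    """Unpadded base64url encoding, as ACME requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """Value to publish in the _acme-challenge TXT record for dns-01."""
    key_authorization = f"{token}.{account_thumbprint}"
    return b64url(hashlib.sha256(key_authorization.encode()).digest())

# Hypothetical token and account-key thumbprint:
txt = dns01_txt_value("some-challenge-token", "some-account-thumbprint")
print(txt)  # a 43-character unpadded base64url string
```

The point being: whoever can write that TXT record gets the cert, which is exactly why control of the DNS panel is the whole ballgame here.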



