
Many years ago someone (possibly/probably Dan Kaminsky) suggested storing your gpg-encrypted+signed full device encryption key in the global DNS cache. If you don't do a lookup every X days, it'll expire from the cache and the drive will be unrecoverable, assuming no other copies of the key exist.



> global DNS cache

No such thing exists

> If you don't do a lookup every X days, it'll expire from the cache

Requerying a cached DNS record doesn’t extend its TTL. The remaining TTL counts down from when the record was first added to the cache.
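
You can see this with Python's dnspython; the resolver IP and query name here are arbitrary examples, not anything special:

    import time
    import dns.resolver  # pip install dnspython

    r = dns.resolver.Resolver(configure=False)
    r.nameservers = ["8.8.8.8"]  # any caching resolver will do

    first = r.resolve("example.com", "A").rrset.ttl
    time.sleep(10)
    second = r.resolve("example.com", "A").rrset.ttl

    # Typically second < first: the cached entry's TTL counts down
    # from when it was first cached; requerying serves the same entry
    # and does not reset the clock.
    print(first, second)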


You're not wrong, but it has been shown that you can store small amounts of data in open resolving DNS servers.

https://github.com/benjojo/dnsfs


That's definitely true, and I didn't dispute that part; I focused on the parts that were factually incorrect.


I'll be the first to admit that, to a DNS expert, my original phrasing was not fully precise or fully complete, but calling it "factually not correct" is unfair. As the GP hints, the idea is that you cycle through a large number of open resolvers around the world, putting the key into, say, 10 of them each time for redundancy & availability. As you usually cannot extend the TTL on those servers, you simply move to a different set of 10 servers during each refresh.
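
A rough sketch of just the rotation part (the resolver pool is a placeholder list of IPs you'd have vetted beforehand; seeding the key is a separate step):

    import random

    # Placeholder pool of open resolvers you've vetted beforehand.
    RESOLVER_POOL = ["203.0.113.%d" % i for i in range(1, 101)]

    def pick_refresh_set(already_used, k=10):
        # Each refresh uses a fresh set of k resolvers, since you
        # can't extend the TTL on the ones currently holding the key.
        fresh = [ip for ip in RESOLVER_POOL if ip not in already_used]
        return random.sample(fresh, k)

    used = set()
    current = pick_refresh_set(used)  # seed the key into these 10
    used.update(current)              # the next refresh picks a new set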


> As you usually cannot extend the TTL on those servers, you simply move to a different set of 10 servers during each refresh.

Think about that from a technical perspective and you’ll realize the flaw. :)

You can’t cycle to a new set unless the authoritative server is still responding with the key. If the authoritative server still has it, what difference does it make that a caching name server also has it? Furthermore, there’s zero guarantee a caching resolver will cache for the length of the specified TTL, so you literally have a land mine that’ll explode randomly and cause you to lose your data.

> As the GP hints

Read it again. GP’s GitHub link doesn’t allude to what you imply it does. Storing arbitrary data in DNS is of course possible, and others will cache it for you, but presenting the scheme you described as feasible just doesn’t hold up.

> but calling it "factually not correct" is unfair.

This entire theory you posted originally doesn’t hold up to even basic technical review. It’s nothing against you personally; the idea simply doesn’t provide any actual benefit, and calling it factually incorrect is entirely fair.


> You can’t cycle to a new set unless the authoritative server is still responding with the key.

Yes - during a refresh you'd (re-)add the key to an authoritative server that you control, query it from X open resolvers that you do not control, then delete the records from the authoritative server, such that the only remaining copies are held by the open resolvers' caches. Care would be taken to make the key forensically unrecoverable on the authoritative server whenever it's not participating in this refresh process.
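
A minimal sketch with dnspython; publish_record and securely_delete_record are hypothetical stand-ins for however you manage the zone on your own authoritative server (e.g. RFC 2136 dynamic updates plus scrubbing):

    import dns.resolver  # pip install dnspython

    def publish_record(name, txt_value, ttl):
        # Hypothetical: add `name <ttl> IN TXT "<txt_value>"` to the
        # zone on the authoritative server you control.
        raise NotImplementedError

    def securely_delete_record(name):
        # Hypothetical: remove the record and scrub zone files, logs,
        # and backups so the key is forensically unrecoverable.
        raise NotImplementedError

    def refresh(name, key_blob, ttl, resolver_ips):
        publish_record(name, key_blob, ttl)
        for ip in resolver_ips:
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [ip]
            r.resolve(name, "TXT")  # seed this open resolver's cache
        securely_delete_record(name)
        # Until the TTL runs out, the only copies of the key now live
        # in the open resolvers' caches.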

> there’s zero guarantee a caching resolver will cache for the ... [full] TTL

To deal with that, you can test resolvers first using useless/random data, and use many (10+) to hedge against the risk that a resolver's policy changes after your test. The hope is that it's unlikely for all 10+ to go offline, time out, etc., before your next refresh. But it is true that some availability risk is the price you must pay for the "unrecoverability after X seconds, using hardware you don't own" benefit of the scheme.
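
Such a probe could look like this, reusing the hypothetical publish/delete helpers from the sketch above (probe.example.com stands in for a zone you control):

    import os
    import time
    import dns.exception
    import dns.resolver  # pip install dnspython

    def publish_record(name, value, ttl): raise NotImplementedError  # as above
    def securely_delete_record(name): raise NotImplementedError      # as above

    def honors_cache(resolver_ip, zone="probe.example.com",
                     ttl=3600, slack=300):
        # Seed a throwaway record, delete it at the source, then check
        # whether the resolver still answers from cache near expiry.
        name = os.urandom(8).hex() + "." + zone
        publish_record(name, "canary", ttl)
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [resolver_ip]
        r.resolve(name, "TXT")       # seed the cache
        securely_delete_record(name)
        time.sleep(ttl - slack)      # wait until shortly before expiry
        try:
            r.resolve(name, "TXT")   # still answering from cache?
            return True
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
                dns.resolver.NoNameservers, dns.exception.Timeout):
            return False             # evicted early or unreachable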



