
For hard cutovers it might be a viable strategy to forward or redirect traffic in between changes. That is, either let the old destination forward to the new one (or vice versa) and then update the records to the new destination, or use an intermediary forwarding destination whose target address you can change in an instant, moving the record to the final destination once things have settled.
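As a rough illustration of that intermediary idea, a minimal sketch in TypeScript (Node.js, built-in net module only; addresses and ports are placeholders): a tiny TCP forwarder whose upstream target can be swapped the moment the new destination is ready, so the DNS record can keep pointing at this box during the cutover.

    // Hypothetical sketch: a tiny TCP forwarder acting as the intermediary hop.
    import * as net from "node:net";

    // The current real destination; swap this the instant the new one is ready
    // (e.g. via a config reload), no DNS propagation required.
    let upstream = { host: "203.0.113.10", port: 443 };

    const server = net.createServer((client) => {
      // Every new connection dials whatever the upstream happens to be right now.
      const target = net.connect(upstream.port, upstream.host);
      client.pipe(target);
      target.pipe(client);
      client.on("error", () => target.destroy());
      target.on("error", () => client.destroy());
    });

    server.listen(443, () => console.log("forwarding :443 ->", upstream.host));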


The recursive resolver you describe would adhere to the same TTL as any reasonable public resolver. The difference in cache behaviour, if any, only depends on whether the resolver already has a cached record that's still valid. That it won't have one just happens to be more likely, because the number of requests your resolver receives is much smaller if you are its sole user. It's possible to force the behaviour you described with specialised analysis tools such as BIND's dig utility with its trace flag, which bypasses any resolver by querying down from the root servers to the designated label without any caches being involved. Even then, you only know that other resolvers will receive the desired answer eventually. The only safe bet is to assume it will take up to the TTL until everyone receives the updated record.


That's correct, "directly query the authoritative nameserver yourself" is precisely what dig +trace does.
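For illustration, a hedged sketch of the same idea without dig, using Node's built-in dns module (the zone and hostnames are placeholders): find the zone's authoritative nameservers, then send the query straight to one of them so no recursive cache is involved.

    import { promises as dns } from "node:dns";

    async function queryAuthoritative(name: string): Promise<string[]> {
      const nsNames = await dns.resolveNs("example.com");   // the zone the name lives in
      const [nsIp] = await dns.resolve4(nsNames[0]);        // address of one authoritative NS

      const direct = new dns.Resolver();
      direct.setServers([nsIp]);                            // skip the local recursive cache entirely
      return direct.resolve4(name);                         // answer straight from the authority
    }

    queryAuthoritative("www.example.com").then(console.log);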

> You still only will know that other resolvers will receive the desired answer eventually.

That's also correct. DNS is an eventually consistent system where if you stop updating the authoritative records, all resolvers will eventually converge to the latest answer once their cached records expire (presuming that they actually respect the cached records' TTLs as expected).


24h seems excessive, but some resolvers may refuse to honour arbitrarily low TTLs and choose to answer with stale records from cache for as long as they deem necessary. A 24h TTL would certainly make any issues with that strategy very apparent.


agreed.


It's supposed to be on another independent device.


Doesn't have to be. While storing them on your computer does not protect you from an adversary with access to your computer, it still protects you against an adversary who intercepts (or guesses, maybe after a breach) your password.


It doesn't have to be, yes, but it's called 2-factor auth for the reason that your computer is one factor and another device is the second.

It won't protect you from the threat 2FA was created to address.


For what it’s worth, whilst your point somewhat stands, two devices by themselves are generally not considered two factors.

Usually, the factors are considered as:

- something you know (e.g a password)

- something you have (e.g. a device token)

- something you are (e.g. a fingerprint or other biometrics)

Single-factor auth uses just one of these, which is why you can unlock your phone with either a passcode or a biometric at the same level of security (when talking about factors).

Two factors should have two unique ones of these, and in this case a TOTP generator on the same computer as you are logging in on is fine because the computer counts as “something you have” and the password you enter counts as “something you know”. An attacker who takes your computer still only gains 1 factor (disregarding secure enclaves and password protection etc) and doesn’t have both.

Of course, if an attacker manages to access both your password manager and your TOTP generator (whether or not they’re on the same device), then both factors are compromised, because the “something you know” factor has been broken due to the things you know being stored somewhere.

Of course, the way you practice the security of each of the factors is important and can vary greatly depending on how much effort you want to put into it. For instance, keeping TOTPs only on hardware tokens which you never keep plugged in protects against your device being stolen.
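To make the “something you have” factor concrete: a TOTP code is derived purely from a shared secret plus the current time, so whichever device stores that secret is the factor. Below is a minimal, hedged RFC 6238 sketch in TypeScript using only node:crypto; the secret is a placeholder, real apps store the value behind the QR code you scanned.

    import { createHmac } from "node:crypto";

    function totp(secret: Buffer, step = 30, digits = 6): string {
      const counter = Math.floor(Date.now() / 1000 / step); // current time window
      const msg = Buffer.alloc(8);
      msg.writeBigUInt64BE(BigInt(counter));

      const hmac = createHmac("sha1", secret).update(msg).digest();
      const offset = hmac[hmac.length - 1] & 0x0f;          // dynamic truncation (RFC 4226)
      const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
      return code.toString().padStart(digits, "0");
    }

    // Placeholder secret; whoever holds this value can generate valid codes.
    console.log(totp(Buffer.from("12345678901234567890")));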


E-mail or SMS codes are not 2FA then either, if the attacker has your device (presumably with the e-mail app logged in already and the password saved). But this seems like a dubious distinction; it's like saying 2FA is no longer 2FA if the attacker has access to the second factor. That's not particularly remarkable.

You can call it 2SV, though: two-step verification. But a user can certainly choose to use it in a way that makes it 2FA by storing the TOTP secret on a dedicated device. The bottom line for most use cases is that it stops people from getting in even if they guess or crack your password.

With hardware tokens, there are still tradeoffs. What happens when the “user” (read: attacker) claims they lost or damaged the YubiKey? What factor do you use to verify them before sending a new YubiKey in the mail? What happens if someone breaks into the user’s mail? Etc. No method is perfect.


The second factor isn't about a second device. It is additional to something you know (password), typically the second factor is something you have (device, yubikey, etc.).

The idea being that the intersection of {people who can get your password, such as through phishing or other digital attack} and {people who have physical proximity and can steal your physical device} are typically much smaller than the set of people in either category.


>something you know (password)

Conveniently saved in your browser :) Might not be easy to extract from a logged-out device, but grabbing the device quickly can bypass both "factors" simultaneously.

Makes me wonder how functions like CryptProtectData protect against physical disk access with a hex editor. The hash of the login password can be changed to anything, and obviously they cannot access the actual password since it should be destroyed after hashing. So unless a TPM is involved I don't see how it can be secure.


> Makes me wonder how functions like CryptProtectData protect against physical disk access with a hex editor. The hash of the login password can be changed to anything, and obviously they cannot access the actual password since it should be destroyed after hashing. So unless a TPM is involved I don't see how it can be secure.

It derives a key from your password when you log in. Changing the authentication hash will only let you log in, not figure out what the key was.
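A hedged sketch of that principle in TypeScript (not the actual DPAPI internals; password and data are placeholders): the data-protection key is derived from the typed password at logon, not from the stored verifier hash, so overwriting the hash lets an attacker log in but never reproduces the key that sealed the blob.

    import { pbkdf2Sync, randomBytes, createCipheriv } from "node:crypto";

    const salt = randomBytes(16);
    // Derived at logon from the password the user actually typed; never written to disk.
    const key = pbkdf2Sync("correct horse battery staple", salt, 600_000, 32, "sha256");

    function seal(plaintext: Buffer) {
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", key, iv);
      const data = Buffer.concat([cipher.update(plaintext), cipher.final()]);
      return { iv, data, tag: cipher.getAuthTag() };
    }

    const blob = seal(Buffer.from("saved browser credential"));
    // Overwriting the on-disk login hash lets an attacker log in as you,
    // but it never reproduces `key`, so `blob` stays sealed.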


Oh that's smart, not storing the password anywhere but using the user as an input source for it.


Even if the TPM is involved, it can be cracked. But as with any hack, once someone has physical access to your computer, all bets are off.

The odds of someone stealing your computer to hack into your accounts instead of simply selling it on eBay are practically zero for most people.


Can it really? The abilities of TLAs are unknown, but RSA with a properly sized key isn't known to have many weaknesses. That doesn't mean there aren't any side-channel attacks, but your average thief isn't going to be able to break into an encrypted hard drive in any reasonable amount of time, even if they have physical possession of the device. Or so I'm led to believe. If you have evidence to the contrary, I'd love to hear it!



You can configure the agent to confirm each key usage to have your cake and eat it too. :)

It's also good to see if any malicious process tries to make use of the agent locally!


I used to use [0] s3ql on top of "slow" FUSE storage. It comes with its own caching layer and some strategies for handling larger blobs of data efficiently even at high latency, but it's a non-shared filesystem: you mount/lock it only once at a time. Otherwise this was a perfect solution for me at the time.

There is also [1] rclone with its own caching layer and its own support for various backends directly. I don't remember anymore why I preferred s3ql, but I usually have some reasoning with things like this.

[0]: https://github.com/s3ql/s3ql [1]: https://rclone.org/


>As a side note, this is why we built Booth.video -- to demo that this isn't a fundamental tradeoff and it's possible to have E2EE, metadata-secure video conferencing in the browser.

Now I wonder how you did that. Is the key exchange between participants happening out of band?



I think it cleared up a thing or two. However, would you mind sharing why insertable streams are apparently required for this to work? As WebRTC traffic is already encrypted end-to-end, it seems to me that constructing the SDP with the key that is currently used here with insertable streams would be good enough.


Sure. So WebRTC is encrypted between peers when 100% of the communication is going peer to peer. But in most WebRTC services, your peer is actually the SFU, which is the server. So you're encrypting to the server, not to the other participants. (Most "pure" WebRTC platforms switch over to SFU-based communications at 4 or more participants, but many of the bigger platforms always send video/audio through the SFU regardless of how many participants there are.)
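For context, a hedged browser-side sketch of why insertable streams matter here (this is the general pattern, not necessarily what Booth.video ships; frameKey is a hypothetical AES-GCM key agreed out of band between participants): the SFU terminates the normal DTLS-SRTP hop, so an extra, participant-only cipher has to be applied to each encoded frame before it leaves the page, which is exactly what the insertable-streams API exposes.

    const pc = new RTCPeerConnection({
      encodedInsertableStreams: true,          // Chrome's opt-in for encoded insertable streams
    } as RTCConfiguration);

    function protectSender(sender: RTCRtpSender, frameKey: CryptoKey) {
      // Non-standard Chrome API; other engines expose RTCRtpScriptTransform instead.
      const { readable, writable } = (sender as any).createEncodedStreams();

      const encrypt = new TransformStream({
        async transform(frame, controller) {
          const iv = crypto.getRandomValues(new Uint8Array(12));
          const sealed = await crypto.subtle.encrypt(
            { name: "AES-GCM", iv },
            frameKey,
            frame.data,                        // the encoded audio/video payload
          );
          // Prepend the IV; the SFU can still route the frame but cannot read it.
          const out = new Uint8Array(iv.byteLength + sealed.byteLength);
          out.set(iv, 0);
          out.set(new Uint8Array(sealed), iv.byteLength);
          frame.data = out.buffer;
          controller.enqueue(frame);
        },
      });

      readable.pipeThrough(encrypt).pipeTo(writable);
    }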


E2EE implies both ends have an encrypted channel to transport data to each other directly, without an intermediary step. This is the very definition of the term, at least in my mind. Having the data only encrypted to and from their servers would merely be transport-layer encryption. Although I have no idea whether they implement one, the other, or both.

In the context of video conferencing software (WebRTC specifically) this is actually somewhat interesting, because typically the signaling server is the one that hands out the public key of the other peer and needs to be trusted. It could by all means deliver public keys for which it possesses the decryption keys, and would therefore be able to play man in the middle in a typically relayed call. So even if E2EE is implemented, it might be done poorly without figuring out how to establish trust independently.


Yeah, the key delivery is the hardest part if you are privacy focused. Signal and WhatsApp have a screen where you can generate a QR code and use that to verify that you and your contact have exchanged keys without a man-in-the-middle attack.
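The underlying idea is simple enough to sketch (hedged, TypeScript with node:crypto; this is the general scheme, not Signal's exact safety-number algorithm): both clients hash the two public keys in a normalised order into a short code, show it as text or a QR code, and the humans compare it over another channel; a MITM who substituted keys would produce different codes on each side.

    import { createHash } from "node:crypto";

    function safetyCode(myPubKey: Buffer, theirPubKey: Buffer): string {
      const [a, b] = [myPubKey, theirPubKey].sort(Buffer.compare); // same order on both ends
      const digest = createHash("sha256").update(a).update(b).digest();
      // Render a few bytes as digits; both parties compare this (or a QR code of it) out of band.
      return Array.from(digest.subarray(0, 10), (byte) => String(byte % 10)).join("");
    }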


I wish browsers would do something similar with their WebRTC stack: something that shows, independently of the site (outside its execution context), which keys are in use and allows for an easy comparison of them. But I don't know of such functionality existing yet.


I think SRV records might be more appropriate for this use case.
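For reference, a hedged sketch of looking one up with Node's built-in dns module (the service name is a placeholder): SRV records publish host and port plus priority and weight, which is the service-discovery information you would otherwise have to overload onto A/CNAME records.

    import { promises as dns } from "node:dns";

    // e.g. _imaps._tcp.example.com -> mail.example.com:993 with priority/weight for failover
    dns.resolveSrv("_imaps._tcp.example.com").then((records) => {
      for (const r of records) {
        console.log(`${r.name}:${r.port} (priority ${r.priority}, weight ${r.weight})`);
      }
    });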

