I registered several important accounts with my @icloud.com email, partly because I was sure I would never switch to Android, and partly because the address looked cool (what a stupid reason in retrospect).
I deeply regret using the @icloud.com address for anything serious, because the moment your iCloud storage subscription lapses (assuming you use more data than the free tier allows), you can no longer send or receive email. This is exactly what happened to me a few weeks ago. I forgot to update my credit card number on my Apple account, so the iCloud storage subscription couldn't renew, and my inbox filled with "Your iCloud storage is full" messages -- one in place of each email sent to my @icloud.com address after the subscription expired. Worse, the original emails never arrived even after I renewed the subscription. To this day I have no idea what they said.
> Today, to encrypt your communication to websites, you use HTTPS, which relies on a vast network of certificate authorities.
This fact has been irritating me for a long time, because no one should believe that every single certificate authority can withstand every attempt to steal its private keys. Yet that is exactly the assumption underlying HTTPS as the only way to use HTTP more securely than exchanging plaintext.
Let's think about this scenario: suppose I built a web service for my personal use and hosted it in a public cloud. I don't trust any certificate authority, so I created my own TLS certificate without one and installed that certificate on the machine I use to connect to the service. Now my server speaks HTTPS using my own certificate. Am I safe? No. Any entity with access to the private key of any certificate authority trusted by my machine can intercept the communication between my machine and my server with a simple MITM.
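To make the failure concrete: being safe here would require the client to drop the system CA bundle entirely and trust only my certificate, which a browser won't do for you. A minimal Python sketch of that idea, with hypothetical file and host names:

    import ssl
    import urllib.request

    # Trust ONLY my own certificate; the system CA bundle is never loaded,
    # so no public CA -- compromised or not -- can vouch for an impostor.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile="my_service_cert.pem")  # hypothetical path

    with urllib.request.urlopen("https://myservice.example/", context=ctx) as resp:
        print(resp.status)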
The problem of being forced to trust certificate authorities could be solved by adding a way to embed a public key in a URL. For example, it would be wonderful to have a URL like httpsecure://rsa:PUBLICKEY/example.com/ to guarantee that example.com always responds using the key PUBLICKEY. IIRC, Tor onion services are an instance of this -- the .onion domains include public keys.
> Am I safe? No. Any entity with access to the private key of any certificate authority trusted by my machine can intercept the communication between my machine and my server with a simple MITM.
You're probably safer than you think. Certificate Transparency is now required by Chrome, Firefox, and Safari; without it you get an error during the TLS connection, before any private data is sent to the (potentially MITM'd) site.
Given that all certificates are logged, site operators can use one of the many CT alert services to be notified if and when a new certificate is issued for their domain. If some random authority they've never heard of issues a cert, or one is issued at a time they know they weren't renewing, it's time to raise major alarms. Such an incident would mean instant loss of all business for that authority, and shockwaves across the internet, especially if the target were a company worth burning a CA for (e.g. Google, which houses so many Fortune 500 companies' secrets).
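For instance, crt.sh exposes the same CT data over a JSON endpoint, so even a tiny cron script can watch for surprise issuances. A rough Python sketch (the field names are what crt.sh returns today, as far as I know):

    import json
    import urllib.request

    domain = "example.com"
    with urllib.request.urlopen(f"https://crt.sh/?q={domain}&output=json") as resp:
        certs = json.load(resp)

    # An issuer you've never heard of, or a not_before date when you know you
    # weren't renewing, is the cue to raise those alarms.
    for c in certs[:10]:
        print(c["not_before"], c["issuer_name"], c["name_value"])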
> the .onion domains include public keys.
The .onion domain is, in itself, a public key. The side effects of your proposed solution are:
A) it would mean you HAVE to trust whoever sent you a link
  1) for web-based referrals, this means trusting your (possibly state-sponsored) search engine to never MITM you (currently mitigated by CT, which would expose Google's GTS issuing a random domain's cert)
  2) for IRL events, this means trusting that the business themselves put up a given QR code with the public key, and not some malicious actor
B) site.com could never rotate their private key without changing every backlink to one embedding the new public key.
These are all problems Tor already faces: you have no idea whether the onion site you're linked to is actually the site it claims to be if it perfectly mimics and/or reverse-proxies the real one. The standing advice is to get URLs from a trusted source once, then only access them via bookmarks so you can't be steered to an impostor. And you can't rotate your private key without going through that whole domain change.
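To make "the address is the key" concrete: a v3 .onion name is base32 over pubkey || checksum || version, so you can pull the ed25519 key back out and verify the checksum locally. A sketch based on my reading of the v3 address format:

    import base64
    import hashlib

    def onion_v3_pubkey(addr: str) -> bytes:
        """Extract and verify the ed25519 public key baked into a v3 .onion name."""
        raw = base64.b32decode(addr.removesuffix(".onion").upper())  # 56 chars -> 35 bytes
        pubkey, checksum, version = raw[:32], raw[32:34], raw[34:]
        expected = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
        if version != b"\x03" or checksum != expected:
            raise ValueError("not a valid v3 onion address")
        return pubkey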
Can't you pin the cert (HTTP public key pinning)? Can we pin the CA cert of a single CA? Can we add a DNS record saying which CA is allowed to issue certs for a domain, the way SPF says which IPs are allowed to send its email?
Web browsers no longer offer key pinning because it's both a foot gun and subject to ransom attacks.
What that means is: maybe the person in charge of the web site "cleverly" enables key pinning, then loses the keys. You fire them for incompetence, but too bad, your site is now unreachable for a long period; hope it wasn't important. Worse: maybe everybody you employ is smart or careful or both, but bad guys break in, set up key pinning, then deliberately destroy the keys. Now your site is unreachable... unless you pay them a ransom for the keys.
For non-browsers (e.g. a phone app) pinning is still very much possible, and judging from what we see on community.letsencrypt.org, it does indeed function as a footgun. E.g. we had an outfit that does industrial IoT whose devices all believed they needed to see certificates from Let's Encrypt X3, which is a shame because X3 was retired in favour of R3, so those devices just broke until a human could reach them to perform a manual firmware update.
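For the curious, app-side pinning is often just a hash comparison after the handshake. A rough Python sketch (the pin value is hypothetical) -- and note that once you ship this, certificate rotation becomes your problem, which is exactly how the X3/R3 breakage above happens:

    import hashlib
    import socket
    import ssl

    PINNED_SHA256 = "9f86d081..."  # hypothetical: sha256 of the server's DER cert

    def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # trust comes from the pin, not a CA
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
            sock.close()
            raise ssl.SSLError("server certificate does not match the pinned hash")
        return sock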
A DNS record indicating which CAs may issue for a DNS name already exists: it's called CAA, and you're welcome to go set it up. However, CAA addresses a different problem than the one your parent was ranting about. CAA tells a trustworthy CA that you don't want it issuing, for example because its processes aren't suitable for you. It does not make issuance impossible: issuing anyway would be a misissuance (policy forbids it, they did it anyway, that's a policy violation), but it wouldn't be prevented, and it is deliberately not checked by software like web browsers.
Let me give two examples: one where CAA fixes the problem, and one where it's not applicable at all.
1. [Yes, this really happened] Facebook has a deal with a CA: Facebook pays them money, and they run a bespoke issuance process that includes making sure the Facebook security team is happy. However, Facebook did not set CAA, so when a contractor who didn't know any better spun up a new web server at something.something.fb.com and asked Let's Encrypt for a certificate, they got one. Facebook freaked out. Setting CAA would have prevented this: Let's Encrypt would say "cannot issue, prohibited by CAA for fb.com", the contractor would ask their contact at FB, who would check with security first, and security would either say no or get a certificate from their preferred CA. Today Facebook sets CAA.
2. Someone buys the domain example.com and is annoyed that the previous owner still holds a valid certificate for it, from Entrust. So they forbid Entrust in CAA. This has no effect on the existing certificate; it only means the new owner can't get new certificates for that name from Entrust. The correct fix was to show Entrust that they, as the new owner, want the certificate revoked. In most cases that's just a matter of sending an email and doing what the reply says, though the details vary by CA.
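Incidentally, checking a domain's CAA policy yourself is a one-liner. A sketch using dnspython (assuming the 2.x resolve() API):

    import dns.resolver  # pip install dnspython

    # A CAA record looks like: example.com. IN CAA 0 issue "letsencrypt.org"
    for rr in dns.resolver.resolve("google.com", "CAA"):
        print(rr.flags, rr.tag, rr.value)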
Although I don't deny that certificate authorities are necessary for convenience, I don't understand why using CAs is the only option. Why doesn't the TLS protocol allow using a key pair agreed upon between server and client beforehand, as in SSH, where the public key to be used for a connection is placed on the server ahead of time?
There are many CAs out there, and if China or Russia hacked into one of them, they could perform man-in-the-middle attacks. I'd like to eliminate that possibility, but the TLS protocol requires me to trust a certificate authority. I may just be a conspiracy theorist, but I suspect that the reason it's impossible to use TLS without trusting a third party called a certificate authority is exactly that someone needed to leave a way to perform MITM attacks.
I don't think TLS cares why the certificate is trusted; validating certificates happens higher in the stack, and you've probably seen the browser UX for accepting an invalid certificate. I know I've added a non-CA cert to my trusted list (on Windows) and gotten the result you're describing.
Firefox is the only browser that ships with a built-in trust store. Other browsers use your operating system’s trust store and you can add & remove trusted CA certs from that trust store using OS-specific utilities.
The big three root store programs are run by Apple, Microsoft, and Mozilla/NSS. Most (all?) Linux distros are based on the Mozilla store. Until recently, Google also used Mozilla’s store for things like ChromeOS and Android. They just recently announced that they’re gonna start running their own program though.
FWIW the browsers seem to do a pretty good job of policing CAs. Probably better than most end-users would do.
Browsers do a crappy job in general for any CA use case that isn't HTTPS on public websites. Guidelines for the WebPKI are, of course, very web-centric: short lifetimes may be acceptable there, and automated frequent reissuance may be an option. But if you look at, say, email and auth certs, possibly stored on a smartcard, things are quite different. Lifetimes should be long, and can be, because the key is used rarely (compared to a webserver's) and may be stored in a dedicated smartcard. Verification is also totally different for an email address or a person's name.
I’m on iOS and don’t have an Android or ChromeOS device handy. I don’t see a way to remove a cert from Apple’s iOS trust store (settings just tells me what store version I’m running). There may be a way to do it using mobile device management (MDM) tools.
On Android (Galaxy Note9 in my case), you can disable any CA certificate from being used. There's a "view security certificates" screen with the ability to disable each individually.
It would be nice to use a hybrid: trust the CA (only) on first use. But I guess in practice some random company is much more likely to misplace its keys than you are to be MITMed by a CA.
Also, the Web PKI model has no real granular authorization when it comes to which CA can issue for which domain. A trusted CA can issue for any domain. So if you TOFU in my CA to connect to my website you’re also allowing me to issue for google.com.
Obviously this is all addressable in theory, but now you’d need some kinda policy system baked in pretty much everywhere.
Your website hands me a cert. I have never seen it before, so I check with a CA that it's legit. From then on I keep using that same cert to connect to you, and the CA no longer matters.
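In code, that's basically SSH's known_hosts. A toy Python sketch of the remember-the-cert half (the first-contact CA check is omitted, the cache file name is made up, and there's deliberately no answer here for legitimate rotation):

    import hashlib
    import json
    import ssl
    from pathlib import Path

    KNOWN_CERTS = Path("known_certs.json")  # hypothetical cache file

    def check_tofu(host: str, port: int = 443) -> None:
        pem = ssl.get_server_certificate((host, port))
        fingerprint = hashlib.sha256(pem.encode()).hexdigest()
        known = json.loads(KNOWN_CERTS.read_text()) if KNOWN_CERTS.exists() else {}
        if host not in known:
            known[host] = fingerprint               # first use: remember it
            KNOWN_CERTS.write_text(json.dumps(known))
        elif known[host] != fingerprint:            # later use: it must not change
            raise ssl.SSLError(f"certificate for {host} changed since first use")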
I haven't checked/verified recently but from your comment I'm guessing that the major browsers still don't support (i.e., enforce) the Name Constraints extension?
There are CAA records in DNS, but those are far too weak. The CAs are supposed to check them at issue-time. To be useful, the clients would have to check them at acceptance-time.
That wouldn't quite work the way you think it would...
The CAA record is useful only at the time a certificate is issued (signed) by a CA.
A client has no way to know what the CAA record was at the time the certificate was issued -- a browser cannot ("at acceptance-time") use the current value of the CAA record to determine whether a certificate was properly issued or not.
You don't have to use a third party; you just have to specify an authority.
If you say the self-signed certificate is the authority, that's no problem.
In my private lab environment I have one certificate (for everything).
The certificate is installed on all servers, and even the clients use the same certificate to authenticate, and everyone trusts the cert as an authority.
You'd never run a production setup like this, but there is nothing stopping you.
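For reference, minting such a do-everything self-signed cert takes about a dozen lines with the Python cryptography package. A sketch (the name is made up):

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "lab.internal")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)  # self-signed: subject == issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())            # the cert vouches for itself
    )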
Because the SSH trust model relies on verifying the fingerprint out of band, which isn't practical for a website; and even if it were, non-technical users (and most technical users) wouldn't do it. Certificate Transparency is a good mitigation for the risk of a CA being compromised.
TLS-SRP allows using an agreed-upon password, alone or in conjunction with a certificate. GnuTLS, OpenSSL, curl and a few other libraries contain implementations. It just hasn't found widespread use, and none of the major browsers support it.
TLS does have a mode that uses a pre-shared key, but in TLSv1.3 I believe it's only used for session resumption.
Edit: Also, I recently learned about DNS Certification Authority Authorization (CAA) records. You can specify which of the public certificate authorities a browser is allowed to respect, for your domain. I don't think it's verified by all browsers yet, but it's a step.
Other people have responded to your CAA mistake (CAA absolutely shouldn't be enforced by anybody except a CA; even researchers monitoring it for other people's sites is dubious, though as a research project it isn't inherently dangerous the way enforcement would be).
But let's talk about PSKs (pre-shared keys).
TLS 1.3 itself doesn't care why a PSK is used. It's true that today your web browser will only use one for resumption, because that offers a significant speed-up in some scenarios on second and subsequent visits.
But for IoT applications it is envisioned that a group of devices might share one or more PSKs out of band. Maybe they're provisioned at the factory; maybe the devices are set up in close physical proximity over low-bandwidth Bluetooth, then build themselves a WiFi network when deployed, using PSKs to secure TLS.
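CPython only grew client-side hooks for this in 3.13, but assuming that, a device-side sketch might look like the following (key, identity, and host are made up; the TLS 1.2 + "PSK" cipher setup mirrors the stdlib docs' example):

    import socket
    import ssl

    PSK = bytes.fromhex("2b7e151628aed2a6")  # hypothetical key shared out of band
    IDENTITY = "sensor-17"                   # hypothetical device identity

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE          # authentication comes from the PSK
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("PSK")
    ctx.set_psk_client_callback(lambda hint: (IDENTITY, PSK))

    with ctx.wrap_socket(socket.create_connection(("hub.local", 8883))) as tls:
        tls.sendall(b"hello")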
Browsers could do this too, but all the vendors are clear there's no way they'd actually want to. What would the UX even look like? "Please enter the hexadecimal PSK for the site in this box"? So today they only use PSKs for resumption.
The reason this one feature (PSKs) serves two very different purposes is to narrow the security-proof work. Mathematicians worked really hard to prove important things about TLS 1.3, and the more distinct features it has, the less focus can be put on any one of them.
Even as it is, they missed the fact that PSKs are symmetric. If Alice and Bob share a single PSK to "authenticate" each other and both are capable of serving, then Mallory can trick Alice into thinking she's talking to Bob when she's actually talking to herself (the "Selfie" attack). It's a small problem, but the proofs didn't cover it, and so the TLS 1.3 document never spells out that you should worry about it.
Pretty sure CAA is supposed to be enforced by CAs, not by browsers. So, for instance, Let's Encrypt should refuse to issue a cert for your domain if your CAA is set up for DigiCert.
CAA records (which specify which CAs are allowed to issue certificates for a given domain) are intended to be enforced by CAs, not by browsers. This prevents unintentional misissuance of a certificate, but not deliberate MITM attempts if the CA is actively involved.
Their counterpart for browsers is the TLSA record, which associates specific keys or certificates with a domain name. That's the part that actually prevents MITM attacks on the client side (assuming the client gets a complete and accurate DNS response, which is a whole other issue), since it causes a compliant client to reject any other keys or certificates. (No idea how widespread client-side implementation is, though.)
I'll also add that Certificate Transparency (CT) is another mechanism designed to mitigate malicious cert issuance by a CA. A CT log is a public, append-only data structure. It doesn't actively prevent anything, but it does ensure that a malicious issuance is easily detectable. In practice it seems to be a pretty effective deterrent against nation-state attacks: they won't go undetected for long.
Skimming the comments here, I was surprised not to see an equivalent piece of code mentioned. To me, my version is more readable than both the first and second examples presented in the article. Does that mean my taste is peculiar?
I kinda like this version best. It makes clear we're updating a ->head pointer in one case and ->next in the others. I like the elegant version, too, since it can update both kinds of pointers in one fell swoop, but you have to grok that p starts as a head pointer and later becomes something's next pointer.
I'd say C syntax for double pointers is a lot less kind than the syntax for single pointers. Your version thankfully has no such line, and doesn't make me think about parens.
There are two cases here (1. the target is at the front of the list, which necessitates changing the head of the list, and 2. the head doesn't need to change), and you handle the two cases separately.
Both the classical and the "elegant" versions are worse than this one.
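For anyone reading without the article open, the two-case shape under discussion is roughly the following. This is my own reconstruction, rendered in Python rather than the article's C:

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def remove(head, target):
        """Unlink target and return the (possibly new) head of the list."""
        if head is target:               # case 1: target is first, so head changes
            return head.next
        p = head
        while p.next is not target:      # case 2: walk to target's predecessor...
            p = p.next
        p.next = target.next             # ...and splice around it
        return head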
For any given P2P system that can share a large number of files, we can argue the system cannot exist by asking one question: is there a way to prevent classified US documents stolen by China, or child pornography, from being shared through it?
If the answer is yes: I'm afraid the system is not peer-to-peer, in the sense that there is a central authority censoring what content may be shared. Therefore projects like youtube-dl can be easily erased from it.
If the answer is no: a system with no possibility of censorship is too dangerous. What if there were literally no way to prevent information that threatens the security of the US from being shared? Fortunately, as of now, no such system exists on the planet. Maybe you now have a vague idea of why it doesn't.
I'm under the same impression, and as someone who has been inspecting the code of the Zoom client (just for fun), I have some circumstantial evidence supporting it: their Windows client uses a third-party library that is used virtually only in China, and whose documentation is available only in Mandarin.
On a tangential note, I was surprised to see no trace of attempts to make inspection of their client software harder. Even function names remain intact in some cases, which I assume would not happen if they had malicious intent, like embedding a backdoor.
> We need to promote our own opensource and free tools to our friends and family, we will get the last laugh.
Although I agree that there should be viable alternatives to today's online communication tools, ones that can't be eavesdropped on, I can see why such things do not exist: they'd be too inconvenient for law enforcement. If you look at things through the lens of whether they make law enforcement's job harder, you'll notice such things tend not to exist. As a rule, popular software must not offer a way to prevent the data passing through it from being inspected by law enforcement. Does Dropbox offer end-to-end encryption? Of course not. Is there popular, easy-to-use disk encryption software? There was TrueCrypt, which is gone for an obvious reason. Does Gmail implement end-to-end encryption for email? Of course not...
> Even function names remain intact in some cases, which I assume would not happen if they had malicious intent, like embedding a backdoor.
For what it's worth, this is a bad assumption.
Someone hiding bad behavior from a reverse engineer wants it to be in friendlyMisnamedFunction, not in lkjwer23_aic. If you remove all the English semantics from the binary, a reverser is free to focus on the behavior; if you don't, you can lure them into a false sense of security.
I'm under this impression because the UI is full of grammatical errors, and not the kind that native English speakers make. I would be surprised if anyone in the US were working on the Windows app.
I sometimes imagine how wonderful it would be if there were a viable alternative to the Web, decentralized and infeasible to censor, only to realize that exactly what would make it great for freedom is why it doesn't exist: it's just too inconvenient for those in power. That might be why Tor was funded by the government; its primary function is to strengthen the existing Web rather than compete with it.
This project seems nice, but there's one detail I have trouble wrapping my head around: why is a template string represented as a lambda containing an f-string instead of as a plain string? That is, wouldn't the code below suffice?
Q("""select 1 from {otherQuery}""")
What does the following code enable that the code above cannot do?
Q(lambda: f"""select 1 from {otherQuery}""")
It's impossible to hook into Python's string-interpolation machinery to the degree required for the first form to work. JS and Julia can do it; e.g. in JS (with TypeScript annotations), it'd just be a matter of defining
class Query {
  sql = "";
  name = "subquery";               // simplified; real code would generate unique names
  dependencies: Query[] = [];
}

function Q(strings: TemplateStringsArray, ...values: unknown[]): Query {
  const q = new Query();
  strings.forEach((s, i) => {
    q.sql += s;                    // literal SQL fragment
    const v = values[i];
    if (v instanceof Query) {
      q.dependencies.push(v);      // track the subquery dependency
      q.sql += v.name;             // splice its name into the SQL
    } else if (v !== undefined) {
      q.sql += String(v);
    }
  });
  return q;                        // used as: Q`select 1 from ${otherQuery}`
}
But this cannot currently be done in Python (see PEP 501), so I'm forcing the user to pass a lambda, whose AST I can retrieve and use to implement the machinery above.
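Roughly, that machinery looks like the following sketch (assuming the lambda's defining statement is self-contained in the source, which inspect.getsource needs):

    import ast
    import inspect

    def fstring_parts(fn):
        """Yield ('literal', text) and ('expr', source) parts of a lambda returning an f-string."""
        tree = ast.parse(inspect.getsource(fn).strip())
        lam = next(n for n in ast.walk(tree) if isinstance(n, ast.Lambda))
        if not isinstance(lam.body, ast.JoinedStr):
            raise TypeError("expected a lambda returning an f-string")
        for part in lam.body.values:
            if isinstance(part, ast.Constant):
                yield ("literal", part.value)             # raw SQL text
            elif isinstance(part, ast.FormattedValue):
                yield ("expr", ast.unparse(part.value))   # e.g. "otherQuery"

    otherQuery = None  # stand-in so the example lambda compiles
    tmpl = lambda: f"""select 1 from {otherQuery}"""
    print(list(fstring_parts(tmpl)))  # [('literal', 'select 1 from '), ('expr', 'otherQuery')]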
I'm pretty confident f-strings use str.format under the hood, so instead of the AST mumbo-jumbo, you could just do query.format(var1="something") or something like that.