tl;dr: Don't trust big scary corporations like Symantec to verify sites, trust your friendly local geek's network.
I think if you weren't exhausted by the sheer length of the post by the time you reach that proposal tucked at the very end, you might think to ask some critical questions. Like, what vulnerabilities and exploits would a peer-to-peer system have? Would this not be open season for socially engineering average folks into trusting the wrong peer? How vulnerable to attack are local geeks and university computer science departments? How would compromises be noticed and handled by the average folks who trust a small local authority? How will the verification work be paid for, or will it be completely volunteer-based, and how efficient will that be?
Moreover, what the author fundamentally misunderstands is the importance of usability in security. Web security isn't perfect, but that's partly by design: stricter security would make ecommerce annoyingly difficult, and then people start taking shortcuts or ignore security entirely, which is a worse outcome. It's not enough to point fingers at users and yell that they're doing it wrong; security architects have to take responsibility for security outcomes. A peer-to-peer system would be significantly more inconvenient for average folks to use correctly, if only because of having to figure out who to trust in the first place.
Well, to be fair, the author raised the UI/UX question, and that could be a great way to overcome the bullshit "green padlock == safe" idea. That idea doesn't hold up post-Heartbleed, and it never did.
A different UI might reveal the trust path more directly, so that when I navigate to my bank that path is forced into view.
I, for one, would love it if my browser displayed the trusted path used to connect to my bank before loading any part of the page. The same goes for self-signed certs. Would I avoid HN if their cert was self-signed? Nope.
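For the curious, here's a minimal sketch of what "show me the path before you load anything" could look like, using Python's standard ssl module. The hostname is a placeholder, and this only inspects the leaf certificate's subject and issuer (the first link in the chain); a real browser UI would walk and display the whole verified chain up to a root.

    import socket
    import ssl

    HOST = "example.com"  # placeholder; substitute the site you're about to trust

    # Standard verification against the system's CA bundle; a self-signed cert
    # would fail here unless you had explicitly loaded it as trusted.
    ctx = ssl.create_default_context()

    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()  # leaf certificate, parsed into a dict
            print("subject:", cert.get("subject"))
            print("issuer: ", cert.get("issuer"))
            print("expires:", cert.get("notAfter"))

Nothing stops a tool like this from printing that information on every navigation; the hard part, as above, is making it meaningful to someone who isn't a local geek.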