
What?

Let's go back to username and password. 2FA forces scammers to up their game.

What about password managers? Having separate passwords for every account makes hacking into your accounts much harder and might hurt everyone in the long run.

And don't get me started on end to end encryption. Privacy, long term, will mean the fall of civilization.

Sarcasm aside, I think I understand your point, which is that we shouldn't just delegate the whole effort of preventing attacks to cloud providers. But as with everything production-grade, the average enterprise just isn't ready to deal with the upfront cost of running its entire computing stack itself. And it doesn't end with this type of mitigation and dependency: a similar argument could be made against using proprietary chip designs made by cloud providers, or any proprietary API solution for that matter. It really is a matter of convenience that a community solution might cover in the future, abstracting away the fundamental building blocks every cloud provider must have (name resolution, networking, storage and compute services) to provide such higher-level functions without lock-in. We are just not there yet.




> just with everything production-grade, the average enterprise just isn't ready to deal with all the upfront cost to run your entire computing solution

That’s not a fair point.

We’re not even trying to make the internet safe. There are zero (0) actions being taken to stop this madness. If you run a large website, you still regularly see attacks from routers compromised 3, 4, 5 years ago. And a mere few days of smart poking around is still enough, to this day, to find enough open DNS resolvers to launch >500 Gbps attacks with one or two computers.

Why are these threats allowed to still exist?

The only ones attempting something are governments shutting down booters (DDoS-as-a-service platforms). But that’s treating symptoms, not causes.

We will eventually need to do something, or it will be impossible to run a website that can’t be kicked down for free by the next bored skid.

Just like paying protection fees to the mafia was a status quo, this also is just that. A status quo, not an inevitability.

The solution is to finally hold accountable attack origins (ISPs, mostly), so that monitoring their egress becomes something they have an incentive to do.


I don't think it's true that zero actions are being taken. When new vectors for amplification attacks are found, they get patched - you can't do NTP amplification attacks on modern NTP servers anymore, for example. But it takes a long time for the entire world to upgrade, and just a handful of open vulnerable servers is enough to launch attacks. And in the meantime, people are always looking for new amplification vectors.
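For context on why amplification vectors matter so much: the attacker's leverage is simply the ratio of response size to request size, since the spoofed source address redirects the large response at the victim. A rough illustration (the byte counts below are made-up round numbers, not measurements of any specific service):

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor: bytes aimed at the victim
    per byte the attacker actually sends."""
    return response_bytes / request_bytes

# E.g. if a 50-byte spoofed query elicits a 25,000-byte reply, the attacker
# turns 1 Gbps of their own upload into 500 Gbps arriving at the victim.
print(amplification_factor(50, 25_000))  # 500.0
```

This is why a single patched protocol matters: removing one high-ratio vector forces attackers back toward sending their own bandwidth.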

> The solution is to finally hold accountable attack origins (ISPs, mostly), so that monitoring their egress becomes something they have an incentive to do.

Be careful what you wish for. The sort of centralized C&C infrastructure and "list of bad actors everybody has to de-peer" that you would need to do this effectively would be a wonderfully juicy target for governments to go, "hey, add [this site we don't like] to the list, or go to prison".


> "hey, add [this site we don't like] to the list, or go to prison".

Aren't there already a dozen or so such lists? I don't see how one more list really increases the risk.

You can make the list public - most of the bad actors are obsolete, compromised equipment for which the owner is unaware of the problem. Once the list is public, it's pretty easy to detect anyone trying to abuse the list as a tool of censorship.


IP reputation is already a thing. And plenty enough ASNs are well-known for willfully hosting C2 servers and spam, DoS, etc sources…


Traditionally, a botnet can be composed (at least largely) of actual consumer devices unknowingly making requests on their owners' behalf. This can cover hundreds of unrelated ISPs as the "origin" and is effectively indistinguishable from organic traffic to a popular destination. "Accountability" is not simple here.


> Traditionally, a botnet can be composed (at least largely) of actual consumer devices unknowingly making requests on their owners' behalf.

And I do count that in.

Just because a user is the source of an attack unknowingly doesn’t make it right.

What would make it right is for there to be a more generalized remote blackholing system in place.

i.e. my site runs on an IP and is able to tell my ISP to reject traffic to it from $sources, and my ISP can forward that request to the source ISP.

And if it makes my site unavailable to that other ISP because of CGNAT and 0 oversight, tough luck. Guess their support is getting calls so maybe they start monitoring obviously abusive egress spikes per-destination.


I like the irony of saying there are zero actions being taken in response to a blog post documenting actions taken to specific CVEs.


These blog posts document the attack. Documenting it and acting on it are different things.

There’s no practical action being taken here besides "use our products because we can tank it for you".

The mitigations listed are better than nothing, but the fact remains that every skid out there can hire a botnet of a few thousand compromised machines (like here) and send you a few million rps (say this protocol attack allowed a 100x higher-than-average impact), which is more than enough to kill the infrastructure of 99.99% of websites. No questions asked.


>There are zero (0) actions being taken to stop this madness. If you run a large website, you still regularly see attacks from routers compromised 3, 4, 5 years ago

Yes, you're 100% correct. Back in the day, when the main botnet activity was spam, if you were infected and started sending TBs of spam, the ISP would first block your outgoing SMTP. If they kept getting complaints, in a week or two they'd cut you off.

I remember 30 years ago, when most people were on dialup, I was fortunate enough to have 128kB SDSL. As a relatively clueless kid I decided to portscan an IP range belonging to a mobile service company. A few days later my dad got a phone call saying their IDS had flagged it and "don't do it or we'll cancel your service". For a port scan of a few public IPs, no less!

ISPs could definitely put a stop to 99% of these botnets, but until they see some ROI, why would they bother?


But that's exactly the problem: it shouldn't require an enterprise-grade tool just to host a simple website on the internet. We've lost something due to our inability to stop attacks at the source and our heavy overreliance on massive cloud providers to do it for us.

2FA and password managers didn't make us heavily reliant on massive companies.


Yes, but if these cloud providers didn't exist eventually there'd be botnets that no site could protect against, rather than the status quo of at least some sites being able to resist them. The idea that the existence of cloud providers that can soak up a lot of traffic is making things worse by causing botnets to get more powerful just seems silly.


You don't need enterprise-grade tools just to host a simple website. However, if your simple site ever gains enough traction to come under an attack, especially one like this, you'll never survive. You can either accept that your service will not survive the attack and shut it down until the attackers declare mission accomplished and stop, then hope they don't notice when you bring it back. No simple site will be able to afford what's required to stay up through these attacks.

I'm not saying I like having to put the majority behind the services of 2 or 3 companies, but if you ever get shut down by some DDoS, you'll understand why people think they need to.


It won’t survive - until a day or so later you’ve migrated to one of the large providers who provide the protection.


A similar analogy can be made with the likes of westward expansion in the continental US.

Back then, you got a piece of land, and really could do what you wanted with it. Build a business, farm, etc. some government taxes but nothing crazy. But you had to deal with criminals, lack of access to medical care, and lack of education.

Now to do the same, you have a slew of building codes, regulations, zoning laws, and are basically forced to have municipal services. Higher Taxes to pay the roads, police force, fire fighters, education services etc.

However, home owners can still just have an egg or vegetable stand at the end of their driveway. It won’t be the same as having a storefront in town, but it’s still doable without the overhead.

Similarly, as the internet matures, we’re going to see more and more overhead to sustain a “basic” business.

But you can still have a personal blog ran in your closet, for lower-level traffic.

The analogy isn’t perfect, but unfortunately, as threat actors’ budgets increase, so too does the quality and sophistication of their attacks. If it were cheap to defend against some of the more costly attacks, they would find a different vector.

The answer, to me, is some tangential technology that is some mix of federation and decentralization. Not in a crypto-bro sense, but just some tech whose fundamental design solves the inherent problem with how our web is built today.

Then threat actors will find another way, rinse and repeat…


> home owners can still just have an egg or vegetable stand at the end of their driveway

No you can't. That is illegal without a "cottage food" license, training, and labeling in most of the US.

https://www.pickyourown.org/CottageFoodLawsByState.htm


Child-run lemonade stands are technically illegal in most states (some have actually carved out exemptions for them because of overzealous policing).

Garage sales often have a specific carve out, also, and limitations on numbers of time per year, etc.

Most areas nobody cares at all until it becomes a nuisance somehow.


Selectively enforced laws are the worst kind of law.


I've always thought it would be interesting to allow as a defense against a violation of a law to prove that the law is regularly violated without consequence.

Because selectively enforced laws are just another way of saying you have a king at some level, the person who decides to enforce or not.


Selective prosecution is a defense under the Equal Protection clause of the Constitution.

However, the Supreme Court has left the prescribed remedy intentionally vague since 1996, which in turn makes the claims themselves less likely to be raised, and less likely to succeed.

https://wlr.law.wisc.edu/wp-content/uploads/sites/1263/2022/...


You have some control over this as an ordinary citizen. Next time you're on a jury for a lemonade stand violation, nullify.


Has a lemonade stand violation ever resulted in a jury trial in the US? I'm skeptical. In places that enforce those rules, usually what happens is that the cops tell the parent it isn't allowed, the kid shuts it down and there's no fine.


Or it turns into a giant PR disaster for the cops.


Don't tell that to the GDPR defenders.


I am a gdpr defender. I would like stricter enforcement.


Okay but does that mean anything regarding the parent commentor's analogy or the article?


20 years ago, if a blog or website ended up on slashdot/digg/whatever, there was a good chance it was going down. Scalable websites are a commodity today.


That goes both ways. What was the price then to get a botnet with 10k nodes making 1k requests / second? What is the price today?


For the website or for the use of the botnet?


For the use of the botnet...


Sure, it's no doubt an arms race. The prevalence of websites going down due to scaling issues feels an order of magnitude lower than it was 20 years ago, though. Purely anecdotal, with no real data to back that up.


Because the majority of sites run on/behind:

- AWS

- Cloudflare

- Azure

- GCP

- Great Firewall of China

Maybe there was some truth about "the world market for maybe five computers", after all...


Sure, I fail to see how that invalidates my point


What I am saying is that we are getting "scalable websites" today individually, but it has cost us overall resiliency because most of us all are hiding behind the big providers. I am not so sure if this is a good trade-off.


> 2FA and password managers didn't make us heavily reliant on massive companies.

Retool: https://arstechnica.com/security/2023/09/how-google-authenti...

Lastpass: https://news.ycombinator.com/item?id=34516275


If Google Authenticator goes away, people will still be able to use 2FA (I for one use Aegis, it's available on F-droid and does everything I need, including encrypted backups)

If Lastpass goes away, people will still be able to use keepass or any of the large number of open source password managers, some of them even with browser integrations.

If I have a website that is frequently attacked by botnets and Cloudflare goes away, what can I use to replace it?
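The portability point about 2FA is easy to demonstrate: TOTP is an open standard (RFC 6238), so any client, Aegis included, generates the same codes from the same secret using nothing but stdlib primitives. A minimal sketch (the secret below is the published RFC 6238 test vector, not a real credential):

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, for_time=None, digits=6, step=30):
    """Generate an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: this secret at T=59 yields "94287082" (8 digits).
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
```

Because the algorithm is this small and fully specified, no single vendor's app going away can lock you out, which is exactly the contrast being drawn with DDoS protection.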


I am sorry, but if your password manager goes away and you have no disaster recovery scenario planned you might not be able to just move to a competitor:

https://news.ycombinator.com/item?id=31652650

My response was to illustrate how insidious big companies are.

Of course nothing compares to the backbone of the web going down. If AWS North Virginia suffers widespread downtime to all its availability zones, much of the web will just go dark, no question about it.


2FA, I’m not sure.

But Lastpass doesn’t represent the whole of password managers. Storing your passwords in an online service is a really silly thing to do (for passwords that matter at least). Use something local like keepass.


Hope you plan ahead for a house fire with a 3-2-1 approach to backups. Maintaining always-on off-site storage is expensive unless you resort to cloud solutions like OneDrive or Dropbox, but then you are back to the problem of having your passwords in the cloud, even if encrypted.

Not using cloud is just very expensive and time consuming for the average user.


Passwords are small enough that you can make physical backups easily.


Honest question, because it is interesting and might change how I approach backing up my passwords: how would you go about keeping that physical copy updated?

What I think would make this approach hard is that you would have to decide, at creation time, whether a newly created account is important enough to warrant updating the off-site physical copy of your most important passwords. (I say this because backing up everything while avoiding the cloud entirely just isn't viable; you would have to update the physical backup for each new account. I am currently at over 400 logins in my password manager; two years ago it was half that.)

I think having your passwords encrypted with a high-enough-entropy master password and a quantum-resistant encryption algorithm, plus an off-site physical backup of your cloud account credentials, is enough for anyone not publicly exposed, like a politician or someone extremely wealthy - though I would be skeptical that even those people go to such lengths to protect their online accounts.
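"High enough entropy" can be made concrete: for a master password or passphrase drawn uniformly at random, entropy is just length times log2 of the pool size. A quick sketch (7776 is the standard Diceware word-list size; 95 is the printable-ASCII alphabet):

```python
import math


def passphrase_entropy_bits(pool_size: int, length: int) -> float:
    """Entropy in bits of a secret built from `length` independent,
    uniformly random choices out of `pool_size` possibilities."""
    return length * math.log2(pool_size)


# Six Diceware words (~77.5 bits) beat ten random printable-ASCII
# characters (~65.7 bits), and are far easier to memorize.
print(round(passphrase_entropy_bits(7776, 6), 1))  # 77.5
print(round(passphrase_entropy_bits(95, 10), 1))   # 65.7
```

The usual rule of thumb is that anything above ~80 bits is out of reach of offline brute force with a slow KDF, so a seventh Diceware word comfortably clears the bar.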


The lesson is not to "avoid" the cloud, but to not be "dependent" on it. Doubly so if the service provided is one that keeps you locked in and can not be ported over.

So yes, I feel comfortable with my strategy of having backups on Blu-ray discs + S3. If AWS goes down or decides to jack up their prices to something unacceptable, I will take the physical copies and move them to one of the dozen other S3-compatible alternatives. I am not dependent on AWS.

But I am not interested in using Google Authenticator or Lastpass because that would mean that I am at their mercy.


LastPass is an issue - but even LastPass would let you export/print the passwords. So no hard dependency there*. Google Authenticator recently did something similar with QR codes.

* though OTP seeds don’t print, and you can’t export/print attachments. I don’t recommend LastPass for these and many other reasons.


With two USB sticks it’s not that much work to take one with a fresh backup to my mom’s when I visit, and take the other one back and update that backup. At worst I lose one or two logins.


It doesn’t take enterprise grade tools to host a website.

It does take enterprise grade tools to defend against the largest DDoS ever attempted.

Those are not the same thing. And those DDoS’s often are aimed at things besides a HTTPS endpoint.


There should be a protocol to block traffic at the upstream provider. So if someone at 1.2.3.4 sends lots of traffic at you, you send a special packet to 1.2.3.4, and all upstream providers (including the provider that serves 1.2.0.0/16) that see this packet block traffic from that IP address directed at you. Of course, the packet should allow blocking not only a single address but a whole network, for example 1.2.0.0/16.

But ISPs do not want to adopt such a protocol.


So I can deny service to your site with a single packet, instead of having to bother with establishing a whole botnet? The current botnet customers would be the first to advocate for this new protocol!


Simple! To prevent it from being abused easily, you could make it so you would need to send a high number of those packets over a sustained period in order to activate the block.


There is already an RFC we could apply, just implement forced RFC3514 compliance and filter any packets with the evil bit set.

https://datatracker.ietf.org/doc/html/rfc3514


And there could be a short time limit on that block, perhaps one hour, but even 60 seconds would be enough to completely flip the script on a DDoS.


You can only block access to your own IP address, so you can ban someone from sending packets to you, but not to anyone else. My proposal is well thought out and doesn't require any lists like Spamhaus that have vague policies for inclusion and charge money for removal. My proposal doesn't have any potential for misuse.


Sorry, this is not well thought out and certainly has potential for abuse. This is on IP and not domain? What is the signing authority and cryptography mechanism preventing a spoofed request?


When you send a "reject" packet, the intermediate routers send back a confirmation code. You must send this code back to them to confirm that the "reject" packet comes from your IP address. No cryptography or signing required.
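For what it's worth, this kind of return-routability check can be made stateless on the router, along the lines of TCP SYN cookies: derive the token from a router-local secret plus the claimed source address, so nothing has to be stored per request. A hypothetical sketch of that idea (the HMAC-cookie construction is my assumption, not part of the proposal above):

```python
import hashlib
import hmac
import os

ROUTER_SECRET = os.urandom(32)  # per-router key; would be rotated periodically


def issue_token(claimed_src_ip: str) -> str:
    """Router side: compute a token bound to the claimed source address.
    It is sent TO that address, so only its real owner ever sees it."""
    return hmac.new(ROUTER_SECRET, claimed_src_ip.encode(), hashlib.sha256).hexdigest()


def accept_reject_request(claimed_src_ip: str, echoed_token: str) -> bool:
    """Router side: honor the 'reject' request only if the echoed token
    matches, proving the requester can receive traffic at that IP."""
    return hmac.compare_digest(issue_token(claimed_src_ip), echoed_token)
```

A spoofer forging someone else's address never receives the token and so cannot complete the exchange, though this only proves reachability of the requester; it says nothing about how you address intermediate routers in the first place.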


I don't think you understand how networking operates at a packet level.


How can it protect from... botnets, where there are tens of thousands "someones"?


You can only ban packets coming to your IP. Botnet can only ban packets coming to its IP addresses.


It's not very hard to send packets with a fake source IP, especially if you don't care about the reply.


Seems easy enough to require (i.e. regulate) end-customer ISPs to drop any traffic with a source IP that isn't assigned to the modem it's coming from. This would at least prevent spoofing from e.g. compromised residential IoT devices. Are they not already doing that filtering? Is there any legitimate use-case to allow that kind of traffic?
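The check being asked for here is trivial; what BCP38-style ingress filtering asks of an access ISP is essentially one prefix comparison per customer port. A sketch of the logic (the prefix and addresses are documentation-range examples):

```python
import ipaddress


def ingress_allows(src_ip: str, customer_prefix: str) -> bool:
    """BCP38-style ingress filter: forward a customer's packet only if
    its source address lies within the prefix assigned to that link."""
    return ipaddress.ip_address(src_ip) in ipaddress.ip_network(customer_prefix)


print(ingress_allows("203.0.113.7", "203.0.113.0/24"))   # True: legitimate source
print(ingress_allows("198.51.100.9", "203.0.113.0/24"))  # False: spoofed, drop it
```

In real deployments this is done in hardware (uRPF on the access router) rather than per-packet software checks, but the decision rule is exactly this simple.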


Someone has to go and add the filtering. Nowadays (or maybe since ten years ago) most ISPs have the filter, but not the last 1% (or maybe 0.01%).


The routers can send back a confirmation token to confirm the origin address.


First of all, there is no way this works reliably for anything but the first hop. There is no way for a router to send a packet to you such that you can reply to that router, unless you are connected directly to it or all ISP routers start being assigned public IP addresses. Additionally, there are normally many paths between you and your attacker, and there is no guarantee that the packets you send will take the same path as the packets you were receiving, especially as the routing rules get modified by your successful blocking requests.

That also means that every router now has to maintain a connection table to keep track of all of the pending confirmations, and to periodically check that table for expirations so it can clean it up. Maybe not that bad for a local router, but this is completely unworkable for routers handling larger parts of the internet.

And of course, anyone who has a tap into that level can trivially spoof all of the correct replies so it's still not a secure mechanism.


You can deny access only from your IP, not for anyone else.


IP addresses can be spoofed. So you’d need some kind of handshake to verify you are the owner of that IP. Which is going to be tough to complete if your network is completely saturated from the DDoS in progress.

I do think your idea has merit though. But it’s still a long way from being a well thought-out solution.


How do you verify the source address of the packet is legit?


The router can send back a confirmation code and you must send it back to confirm that request comes from your IP.

Also, on well-behaved networks that do not allow spoofing IP addresses, this check can be omitted.


> The router can send back a confirmation code and you must send it back to confirm that request comes from your IP.

Ideally with the token packet being larger than the initial packet, so it can easily be abused for a reflection attack... ;-)

> Also, on a well-behaved networks that do not allow spoofing IP addresses, this check can be omitted.

This is already not true for most networks, and in your case it would have to be true for all intermediate networks, which is just impossible.

In another post you suggest this should also allow blocking entire networks; how do you prevent abuse of that?

Your suggestion is anything but well thought out; it's a pipe dream for a perfect world, but if we lived in one, we wouldn't have DDoS attacks in the first place.


Yeah, we should invent secure communication channels and crypto keys first...


This proposal only works if the packets are readable by every intermediate router. Or are you suggesting that you establish a TLS session with every router between you and the attacker?


What you say already exists, hell, you can use BGP to distribute ACLs

But it costs space in the routing tables, and that means replacing routers earlier. No wonder it isn't popular, especially if you multiply it by a thousand customers.

"block all traffic from outside from this IP" is significantly easier than "block all traffic from outside from this IP to this client". And you need to do it per ISP client, else it is ripe for abuse.

And don't forget a lot of the traffic will come from "cloud" itself.


> What you say already exists, hell, you can use BGP to distribute ACLs

But you should own an AS for that?

> But it costs space in the routing tables

Not implementing my proposal leaves critical infrastructure unprotected from foreign attacks. Make larger routing tables. Also, instead of blocking single IPs one can block /8 or /16 subnets.


> Make larger routing tables.

Brilliant! Why didn’t we think of that?!? MOARE TCAMS!!!


If Cloudflare can do this on commodity hardware (stop attacks and block thousands of IPs), then router manufacturers, who have custom hardware, can do much more.

Also, in Russia, for example, there is DPI and recording of all Internet traffic, and if that is possible in Russia, then the West can probably do 10x more. Simply adding a blacklist on routers seems like an easy task compared to DPI.


This could be done on a paid basis. For example, for $1/month a customer gets the right to insert 1000 records (blocking up to 1000 networks or IPs) into a blacklist on all Tier-1 ISPs. For $100/mo you can withstand an attack from 100,000 IPs, which is more than enough, and Cloudflare goes bankrupt.


I just imagined this: ISPs could make a [slow] isp.com?target=yourwebsite.org/fromisp redirecting URL. If you receive unusual amounts of requests from the ISP, you redirect them through their website.

They can then ignore it until their server melts (which takes care of the problem) or take honorable action if one of their customers is compromised. The S stands for service, after all.


It appears you don’t understand DDoS at all. There aren’t humans sitting behind browsers, or scripts using browser automation software. No one cares about, much less respects, your “redirect”, because no one’s reading your response. Most of the time the attacks aren’t even HTTP; they are just packet floods.


> It appears you don’t understand DDoS at all.

I can confirm this. I see web pages talking about redirecting traffic to scrubbing centers.


> Of course, the packet should allow blocking not only a single address, but a whole network, for example, 1.2.3.4/16.

So, if my neighbour is infected and one of his devices is part of a botnet, I get blocked as well?


Yes. Because blocking several extra users on a bad network that has several infected hosts and does nothing about it is better than being under attack.


Block the whole country, then I guess you’ll see laws passed that IOT providers need to start updating at a better clip.


That already effectively happens in a lot of cases.


If the source field in a packet reliably indicated the source of the packet, and a given IP was sending you a lot of unwanted traffic, you'd ask their ISP to turn them off and the problem would be solved. Maybe one day BCP38 will be fully deployed and that will work. I also dream of a day when chargen servers are only a memory. Some newer protocols are designed to limit the potential for reflected responses.

Null routing is available in some situations, but of course it's not very specific: hey upstreams (and maybe their upstreams), drop all packets to my specific IP. My understanding is null routing is often done via BGP, so all the things (nice and not) that come with that.

Asking for deeper packet inspection than looking at the destination is asking for router ASICs to change their programming; it's unlikely to happen. Anyway, the distributed nature of DDoS means you'd need hundreds of thousands of rules, and nobody will be willing to add that.

Null routing is effective, but of course it takes your IP offline. Often real traffic can be encouraged to move faster than attack traffic. Otherwise, the only solution is to have more input bandwidth than the attack and suck it up. Content networks are in a great position here: because they deliver a lot of traffic over symmetric connections, they have a lot of spare inbound capacity.


> If the source field in a packet reliably indicated the source of the packet and a given IP was sending you a lot of unwanted traffic, you'd ask their ISP to turn them off and the problem would be solved

No. Your email will go straight into the trash, because the ISP is not interested in doing something for people who don't pay them money. Also, even if they cooperate, it will take too much time.

> Null routing is available

Null routing means complying with criminals' demand (they want the site to become inaccessible).

> it's unlikely to happen

It will very likely happen if there is a serious attack on Western infrastructure: for example, if a large city has no electricity for several days, or if hospitals across the country stop working, or something like that. Then measures will be taken. Of course, while the victims are small non-critical businesses, nobody will care.

> Otherwise, the only solution is to have more input bandwidth than the attack and suck it up. Content networks are in a great position here, because they deliver a lot of traffic over symetric connections, they have a lot of spare inbound capacity.

So until my proposal is implemented the only solution is to pay protection money to unnecessary middlemen like Cloudflare.


Do you know what the first D in DDoS attack stands for?


I am pretty sure that protocol would be just as abused.


How exactly? You can authenticate sender by sending a special confirmation token back.


How does one get removed from the block list?

Say some IoT device that half of households own gets compromised and turned into a giant botnet. The news gets out and everyone throws away that device. Now they are still blocked over a threat that doesn't exist anymore... doesn't seem like a good situation for anyone.

I'd imagine that the website owners that want the attack stopped will soon want to figure out how to get traffic back since they need users to pay the bills.

What's to stop someone from making an app that participates in attacks when connected to public(ish) wifi networks, long enough to get those networks all shut off from major sites?

How does this stop entire ISPs from getting shut off when the attackers have managed to cycle through all the IP pools used for natting connections? (e.g. the Comcasts of the world that use cg-nat to multiplex very large numbers of people to very small numbers of IPs)?


> How does one get removed from the block list?

We can add an "accept" packet that lifts the ban.

Also, how do you remove yourself from blacklist when banned by Google or Cloudflare? I guess here you use the same method.

> Say some IoT device that half of households own gets compromised and turned into a giant botnet. The news gets out and everyone throws away that device. Now they are still blocked over a threat that doesn't exist anymore... doesn't seem like a good situation for anyone.

Not my problem. Should have thought twice before buying a vulnerable device and helping criminals. As a solution they can buy a new IP address from their ISP.


As much as I half-wish there was something like this, it does sound like email spam blacklists all over again.


Yes, what the OP is saying is related to one of the paradoxes of security/defence: the more one increases one's defences (as Google is doing), the more that increase pushes one's adversaries to increase their offensive capabilities. Which is to say that Google playing it safer and safer actually causes their potential adversaries to become stronger and stronger.

You can see those paradoxes at play throughout the corporate world and especially when it comes to actual combat/war (to which actual combat/war these DOSes might actually be connected). For example the fact that Israel was relatively successful in implementing its Iron Dome shield only incentivised their adversaries to get hold of even more rockets, so that the sheer number of rockets alone would be able to overwhelm said Iron Dome. That's how Hamas got to firing ~4,000 rockets in one single day recently, that number was out of their league several years ago when Iron Dome was not yet functional.


It's the opposite: the number of rockets was growing, and hence the Iron Dome was developed. The Israelis saw the writing on the wall and acted accordingly. The laser system will be operational soon, and then it will cost $1 per shot.


Unless it's cloudy outside.


> Let's go back to username and password. 2FA forces scammers to up their game.

Let's do it. It works for the website you're using right now. 2FA was in large part motivated by limiting bot accounts and getting customers' phone numbers.

I can't imagine how much productivity the economy loses every day due to 2FA.


Is this sarcasm? If not, please provide some more details on why you think "2FA was in large part motivated by limiting bot accounts and getting customers' phone numbers". I never used a phone number for 2FA - mostly TOTP. Bots could do that too. I don't see the connection.

>I can't imagine how much productivity the economy loses every day due to 2FA.

Is it really that much? Every few days I have to enter a 6 digit number I generate on a device I have with me all the time. Writing this comment took me as much time as using 2fa for a handful of services for a month.


> Every few days I have to enter a 6 digit number I generate on a device I have with me all the time.

I use more than one service a day, and some infrequently, so about every day I have a minute or two where I try to log in, need to find my phone (it's not predictable when it will ask), and then type the code in. This happens to every person several times a day!

I also now must carry a smart phone with me to participate in society.

But the main drag is that when people lose or break their phones the response is: "just don't do that" and the consequences range from losing your account to calling customer service.

> Mostly TOTP. Bots could do that too. I don't see the connection.

Most people using 2FA do not use TOTP, they use a phone number.

Bots could use TOTP, it's more infrastructure, and it's a proof of work function for them to login.


While I don't take starcraft2wol's theory seriously, there are a bunch of services that have made phone numbers essentially mandatory. They claim this is to "protect your account".

You sign up for a Skype account or Twitter account and decline to give your phone number, instead choosing a different form of 2FA? In my experience your account will be blocked for 'suspicious activity' even if you have literally no activity.


And you still don't take my theory seriously :)


To add, password managers provide great coverage of almost every problem 2FA is supposed to solve, and they improve on the workflow your grandma already knows (writing passwords on a sheet). The only difference is Google doesn't get any money when you run a script on your own computer.


> It works for the website you're using right now

It doesn't; you regularly see people getting their accounts stolen here. That wouldn't be possible (or at least not this trivial) with any competent implementation of 2FA.


> Privacy, long term, will mean the fall of civilization.

I'm curious about your rationalization for this. Lack of privacy will also mean the fall of civilization. Civilization is just doomed to fail at one point or another. All things come to an end.


This was me being sarcastic. Of course we need privacy - not because we have things to hide, but because individuality can only flourish without constant surveillance.

Yes! All things come to an end, and that is why some recent philosophers think Plato was naive to believe societal rot could be minimized or eradicated. This is where negative utilitarianism comes in, where the point of society is not to maximize happiness (and thereby prevent society from collapsing) but to minimize suffering (and thereby provide mechanisms to minimize the damage from transitions between forms of organization when society collapses). I have to refer you to Karl Popper's The Open Society for this, because, needless to say, this answer is very reductionist.


Ah I just missed the sarcasm. Yeah, and when the sole goal is to minimize suffering, tyranny is introduced.

"Those who would give up essential liberty, to purchase a little temporary safety, deserve neither liberty nor safety."


This discussion is somewhat reminiscent of "Don't hex the water"..

https://www.youtube.com/watch?v=Fzhkwyoe5vI


None of your examples are valid, IMO.

Procuring and operating the infrastructure to mitigate this kind of attack costs many many thousands of dollars or requires becoming part of the Cloudflare/AWS/Google hive.

Joe Schmo can set up a TOTP server, run keepass/bitwarden and use letsencrypt for free (or another SSL provider for cheap).

The lament from parent is that running a simple blog reliably shouldn't require being inside Cloudflare's castle walls or building your own castle.

---

My personal observation is that simple websites should continue operating HTTP1!


That's not a valid comparison, since there are various effective decentralized 2FA methods available – unlike for DDoS protection.



