Well, that's kinda what you get when Russian carriers and China Telecom constantly claim networks that aren't theirs to begin with. Some form of cryptographic ownership verification is the logical next step to prevent these things from happening.
Though I'd prefer a decentralized/pinned approach when a single point of failure (read: a CA) is the alternative.
I don't understand how RPKI prevents route hijacking. It just signs that a certain AS owns a certain prefix, right? How does that stop another network from pretending to be peered with my network, then announcing an indirect route, copying the signature from my valid announcement of the same prefix?
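Roughly, RPKI origin validation only checks the final AS in the path against signed ROAs; a minimal sketch with a hypothetical ROA table (not a real validator, which would pull ROAs from the RPKI repositories):

```python
# Minimal sketch of RPKI Route Origin Validation (RFC 6811).
# A ROA binds a prefix (up to a max length) to one authorized origin AS.
import ipaddress

roas = [
    # (prefix, max prefix length, authorized origin AS) -- hypothetical data
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
]

def rov_state(prefix_str, origin_as):
    prefix = ipaddress.ip_network(prefix_str)
    covered = False
    for roa_prefix, max_len, roa_as in roas:
        if prefix.subnet_of(roa_prefix) and prefix.prefixlen <= max_len:
            covered = True
            if origin_as == roa_as:
                return "valid"
    return "invalid" if covered else "not-found"

# A hijacker who keeps the legitimate origin AS at the end of a forged
# AS_PATH still passes, because only the origin is checked:
print(rov_state("203.0.113.0/24", 64500))  # "valid" even on a forged path
print(rov_state("203.0.113.0/24", 64666))  # "invalid"
```

Since the forged announcement keeps the victim's AS as the origin, it still validates; that gap is exactly what ASPA and BGPsec path validation aim to close.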
Do you know BGPKit [1]? I'm not sure what the state of the project is, but I vaguely remember them implementing ASPA and being involved in the RFC back then.
I love how it will only ever be one leaky abstraction after another (incompleteness theorem), with a Lindy value of a few years before realizing that and having to hallucinate something new, but you all keep trying to secure what physics won't allow us to.
You all should go touch grass and learn to roll with our human frailty and imperfection rather than drive yourselves mad bouncing off the walls of your language and mathematical primitives.
Just remember you’re one of billions and no one needs you specifically. Just enough people overall so that life isn’t so shit one would be better off dead themselves
Because they are not idiots. Were they idiots, they'd quickly lose to someone more clueful.
The idea to regulate only international peering looks best so far. It's much like (m)any other regulations that are only present on the border between countries, and are absent inside.
At the very least, limiting such regulations to international-only limits their blast radius.
With China specifically it could work, because they internally control their points of peering with the rest of the world. With Russia, less so. With a "normal" Western country, likely infeasible, but in the latter case the internet access is not controlled and weaponized by the government, so there is no real need, self-regulation suffices.
"Nationalizing the internet like the telephony system" is a meme and agenda among some ITU people, originating from China and Russia.
And now the American FCC comes with this... Perhaps the FBI's counter intelligence branch should do a better job because this smells fishy.
They need to. Honestly we need a way to completely segregate Chinese and Russian networks off, as well as anyone who peers with them. They are using our open networks to brazenly attack us in broad daylight… it’s time to fight back.
How many ways do they have to transmit to Western networks? Proxies, tunnels, rooted machines, sending balloons with 5G modems that dwell in our airspace for days? I would much rather see a 100:1 defensive effort in operating system security. Lockdown mode in macOS is like Spinal Tap's amplifier that goes to 11. Why not just have 10 be the most secure, and make it go up to 10?
coretx’s “…a idiot…” and “privatize everything and force non-owner classes to capitulate to private power” make it clear coretx is exactly the kind of idiot no one should peer with
Government is a “multi stakeholder” model. Whether that or private you get a committee determining based upon the biases of the committee.
Same old human bullshit all the way down no matter the semantic bullshit that annotates how the technical decision was made.
Wouldn't regulating who you can peer with potentially violate the 1st Amendment? Since you're effectively regulating who someone's[0] computer can talk to?
The 1st Amendment is not an absolute bar on any restrictions on speech whatsoever. It is a bar on state action to limit (or compel) protected speech on the basis of content. I could load you up with legalese here, but the rather simple explanation of the matter is that, if it's not restricting speech for political opinions (in a very expansive sense of political), then it probably will pass constitutional muster.
If I have the underlying context right here, this is effectively regulation prohibiting people from lying in the course of their normal operations, which smells a lot like a typical fraud statute, and government restrictions on fraud-like things have almost always been upheld as constitutional. (The main exception I'm aware of is US v Alvarez, but even there, SCOTUS said it was only a problem to ban lying for lying's sake; banning lying with the purpose of getting a benefit is acceptable).
Like most things in law, it's not as straightforward as "it's speech so there's nothing the government can do about it". I don't know the case law on this, but given that the FCC has been around for decades, that most of its regulations govern what and how companies and people can communicate, and that its authority hasn't been struck down, I'm going to guess they're on solid legal footing when it comes to enacting regulations.
It's a power grab in the same sense as when people got outraged that everything on HTTP suddenly had to have PKI and X.509 certificates. While X.509 is shit and ASN.1 an outdated mess, nobody today would argue that TLS is bad.
That was definitely a power grab. Suddenly CAs had all the power. The ONLY reason we ever accepted a browser-mandated transition to TLS is that Let's Encrypt started existing. Without Let's Encrypt, this would have the effect of requiring every website operator to pay money to a CA, even with full self-hosting.
When Let's Encrypt eventually stops behaving in an altruistic fashion, we'll stop doing near-mandatory TLS.
I would argue that with mandated TLS, there is no such thing as "full self-hosting" any longer if you want to be part of the web. Everyone is beholden to a CA somehow.
You need to ask central authorities to issue you a public IP, and almost certainly a domain name as well. I don't see how requiring a TLS certificate on top of that changes much.
You most definitely don't need a domain name, and I have never been "issued" an IP in my life. It's assigned by my ISP for the duration indicated by DHCP.
Most home internet connections are very fast now. A human person is perfectly capable of hosting a static website of files in folders from home without CAs being involved. And it's much safer than, say, running Chrome with JavaScript on and visiting a random website.
Chrome, Brave, and other browsers now refuse to download content from HTTP websites. In the coming years, HTTP will be fully deprecated on consumer platforms for browsing the web.
Except they aren't fully capable, because it's considered insecure: certain features are disabled, and all options to bypass this were deliberately made extremely difficult to use. It would be even worse if Mozilla's original plan had succeeded - making TLS completely mandatory on the Internet, so that http URLs would only be accepted for localhost and RFC 1918 IP addresses.
Yes, people stuck using a corporate browser might have trouble accessing non-corporate websites because of those browsers' loss of capability. It sucks. HTTP webservers are fully capable, but realistically every human person should do HTTP+HTTPS for the best of both worlds.
Regardless, for HTTP there's an entire 'year-2000s web' worth of people out there using browsers that can access the non-corporate web. As the corporate web diverges with things like stateful HTTP extensions, UDP-based HTTP, and CA-TLS-only implementations (if not specs), there's no need to switch to a niche protocol like Gemini. Just making a normal website on the web is enough to bring back the old small-web environment. As a human person not operating under a profit motive and just doing things for kicks, this is just fine. A silver lining, like how Usenet is actually good again now that most people don't get it from their ISP.
Yes, Mozilla is very corporate. These days its primary design considerations are about executing JavaScript to buy things or watch DRM media. Its use cases cannot allow for it being a web browser for human persons; its use cases are all about commerce now. But there are fine Firefox forks that split off before it just became a Chrome copy.
Self-hosting a website means other people can use your website. That person is right. Let's Encrypt controls the web now, and we only put up with it because it's acting altruistically right now. But that's also how we used to treat Reddit and Twitter and Freenode and how we still treat Discord and Hacker News. Only in one of these cases did users manage to wrestle power back when the owners turned sour, and that was only possible because the ruler forgot to pay his keys, so they defected.
No one seems to realize that Let's Encrypt is a single point of failure now. Compromise them and you compromise a huge percentage of traffic on the web.
Their root CA can't really be revoked either unless you want to turn off most of the internet. Not that big CAs would ever suggest anything like that in the name of "security"
TLS is not bad, but CA TLS is for the vast majority of human persons on the internet. It makes things very fragile; nothing lasts more than a couple of years without constant maintenance. This is okay for commerce and institutions, but for human websites it is devastating. Human persons just don't have the same use cases or threat models as a megacorp, and cargo-culting megacorp-style solutions onto everyone is bad.
The threat model of plain http is about the user (and injected content) and not the server. Plain http is a threat to users.
Has it really been so many years since people used to do TCP injections, walking around coffee shops or wherever, returning results faster than the internet server and injecting goatse?
Nope, you've just applied your application model of the web to the web as a whole. Most websites are not actually applications and the risk of "injected content" to people with javascript turned off is very, very low.
The risk comes from the absolutely bonkers corporate/institutional use cases of automatically executing all random unverified programs sent to you. When you remove this crazy use case suddenly all the problems (and bonkers requirements) go away.
Seeing a goatse is not the end of the world, and actual MITM injection attacks like you describe are rather rare now. I miss when wifi used to be open and that issue mattered.
The internet community has had plenty of time and opportunity to self-organize sufficient security measures. The need for security has long been recognized, and it's understandable that the powers that be are getting impatient. While it would obviously be preferable for the industry to do this voluntarily, if they don't, then I guess it's justifiable that they get regulated.
BGP operators _have_ self-organized sufficient security measures. Compared to just about any other attack vector on the internet, BGP hijacking is among the least likely to impact most people.
Given that accidents involving BGP within the past several years have led to worldwide outages of the world's most-used websites, this is just not true. Also, thousands of known bad announcements occur every year, usually exploited in a very small window to send large volumes of abusive advertisements.
There's no reason not to force the industry to hold people accountable for false announcements. The privilege of announcement should be acquired by posting a significant amount of capital as a bond, from which damages can be removed when a system makes a false announcement. The vast majority of damages are a result of network operators on the subcontinent -- it is high time we figure out how to make them take the issue seriously, and pay out the nose until they do.
Threat models matter. If you're defending against nation state actors in a military/cyber context it is an essential part of the overall defense strategy. Ignoring BGP on the grounds that "it was always insecure" is then just weird, if not reckless.
The SCION project (IIRC from ETH Zurich) solved all of this and has also been extensively tried in the field.
If you are defending against a nation state, you should be worried about your staff being bribed, otherwise coerced or worse just being foreign agents.
In properly run networks, BGP hijacks shouldn't have a noticeable impact.
But what I do have are a very particular set of skills, skills I have acquired over a very long career. Skills that make me a nightmare for people like you
> Compared to just about any other attack vector on the internet, BGP hijacking is among the least likely to impact most people.
But when it happens, it impacts massive amounts of people - about once a year on average [1]. Sometimes it's censorship gone bonkers, sometimes it's likely a three-letter agency, sometimes it's fat fingers, and sometimes it's cybercriminals attempting to loot cryptocurrency wallets.
BGP hijacking also works on a small scale. If you don't use DNSSEC and somebody hijacks the prefix(es) that host your DNS, they can obtain a letsencrypt certificate and redirect all your traffic.
If you don't have a CAA record and somebody hijacks the prefix(es) where your webserver is hosted, they can also obtain a letsencrypt certificate and redirect your traffic.
I'm not sure you're following. An attacker who controls BGP controls, for some small (or large) section of the Internet, the meaning of IP addresses. No DNS validation gets you around that.
LetsEncrypt does in fact do things to mitigate this attack, but they have nothing to do with DNSSEC: they do multi-perspective lookups, so you'd need Internet-wide routing control.
It seems you're missing something, maybe because you don't consider DNSSEC something that gets actual use.
With DNSSEC, somebody can reroute traffic all they like, they cannot generate fake DNS responses that are DNSSEC valid for DNSSEC secured victim domain. So if the CAA record is properly set to only allow the dns-01 validation method for ACME, there is simply no way to obtain a false certificate even if the attacker controls all of BGP.
That issue isn’t a dnssec problem, it’s that Let’s Encrypt was not familiar with the route hijacking threat model. It was pointed out early to them and they ignored it.
Why blame LetsEncrypt? Instead blame the operators who are refusing to address basic network security.
I run a network; we do the whole shebang of RPKI, DNSSEC, and CAA. This sounds a whole lot like operators refusing to address clear security issues. LetsEncrypt is not to blame when someone spoofs your address space.
LetsEncrypt is not a LIR/RIR, their business is not IP resources but SSL certificates. They are a CA. They have no tools available to them to address that problem.
If you set the CAA correctly, then letsencrypt will limit validation to the dns method. Together with DNSSEC that is enough to prevent issuing certificates in case of a route hijack.
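For reference, a CAA record that pins issuance to Let's Encrypt and restricts it to the dns-01 method (via the `validationmethods` parameter from RFC 8657, which your CA must support) might look like this in a zone file - domain is an example:

```
; Only Let's Encrypt may issue, and only via the dns-01 challenge.
example.com.  IN  CAA  0 issue "letsencrypt.org; validationmethods=dns-01"
; Forbid wildcard issuance entirely.
example.com.  IN  CAA  0 issuewild ";"
```

Without DNSSEC on the zone, though, a hijacker who controls the routes to your nameservers could still hide or forge the CAA response, so the two really do need to be deployed together.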
Legacy prefixes do not support RPKI unless you sign ARIN’s registration agreement and agree to pay. Many early IP address holders (including myself!) have never signed.
You don’t have to pay a dime. But don’t expect the rest of the Internet (that DO pay for their resources) to continue to guarantee reachability to your address space.
If you won’t get on board with RPKI/IRR you can’t cry foul when the rest of the Internet is paying the price to be reachable.
I am a resource holder and I pay my dues. I have no problems with paying for that privilege.
Internet access is not an inalienable right. It is a privilege, even as it's become more and more of a utility. Until laws start to reflect that, it is still a privilege at best.
Edit: before someone says anything about the trust anchors - reminder: there are two overarching namespaces on the Internet, IP and DNS. You are free to ignore the authorities of both, but don't expect the rest of the Internet to play along when you want to use .billybob as your TLD.
As long as RPKI "unknown" / not-found prefixes are able to be globally routed, I will not pay. I have a legacy ARIN /24 from the 90's. It was cheaper for me to get an ASN and IPv6 block through a RIPE LIR than go through ARIN.
As for IRR, one of my upstreams created an RADB entry for me on behalf of my ASN, so not too concerned there.
Yes. Whether or not a particular standard has been implemented is not interesting. What matters is the result.
Is BGP an attack vector that matters for the vast majority of threat models right now? I would say no. Given that: there is no need for (inevitably) poor regulation.
If your operation includes communication over the internet, BGP hijacks are in your threat model (or your threat model is incomplete). I don't understand how "endpoints we care about may become unreachable" is not a big point for everyone. (Unless your business is extremely async and a day of delays is insignificant.)
By this logic, I should be concerned about defending against raccoon attacks since they are endemic to my area and I often go outside.
The point is that, in practice, the attacks are so uncommon and mitigated by so many other factors that the cost involved of further mitigation it isn't worth it.
You develop a threat model to specifically get rid of concerns like this; not to list every possible attack vector imaginable.
What does the cryptography add? RIRs publish who owns which prefixes, and sign this list. If you're America worried about foreign countries hijacking BGP, you can ignore announcements received from overseas about prefixes that are owned by American actors, unless they add a record indicating they expect to deploy them overseas. You don't need any additional cryptography for this, or any protocol change.
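As a sketch of that policy (hypothetical registry data and field names; real RIR data would come from their signed bulk feeds):

```python
# Sketch of the proposed policy: reject announcements learned from foreign
# sessions for prefixes registered to domestic holders, unless the holder
# has flagged the prefix as deployed abroad. Data and fields are made up.
import ipaddress

registry = {
    # prefix -> (registered country, "may be announced abroad" flag)
    ipaddress.ip_network("198.51.100.0/24"): ("US", False),
    ipaddress.ip_network("203.0.113.0/24"): ("US", True),
}

def accept(prefix_str, learned_from_country):
    prefix = ipaddress.ip_network(prefix_str)
    entry = registry.get(prefix)
    if entry is None:
        return True  # unregistered here: fall through to other policy
    country, abroad_ok = entry
    if learned_from_country != country and not abroad_ok:
        return False  # domestic prefix announced from overseas: reject
    return True

print(accept("198.51.100.0/24", "RU"))  # False: likely hijack
print(accept("203.0.113.0/24", "RU"))   # True: holder opted in
```

As the parent says, nothing here requires new cryptography or a protocol change, just a signed registry dump and filter policy at the border sessions.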
Political routing is a faux pas. Also, the state(s) is/are not the authority here.
The RIRs are. They operate a realm, and nation-state and/or individual interests don't end at a border. The realm doesn't even have borders.
Nevertheless, you are right about the need for more information sharing, although it's hard to foresee what new security issues that may introduce.
You might think the state is not an authority, but everyone else who thought that got jailed by the state, so it seems to be the case that it is one. If the state and some protocol disagree, and the state has the power to imprison people using the protocol if they don't cooperate to subvert it (something like this happened to Ethereum IIRC), the protocol has to just deal with it.
It's outright crazy to see US diplomats work their asses off globally, only for a lower institution (the FCC), with less intelligence and fewer capabilities, to undermine their work and formal US geopolitical grand strategy and policy.
I'll have to learn more about the protocol but I immediately distrust anything with things called "trust anchors". Sounds like the kind of high-value-target that attracts corruption.
At the router level you can do geographic IP filtering, and for protecting your core router there will almost always be some firewalling (e.g. pfSense), but it ain't foolproof.
A WAF or any other perimeter security product can enforce geoblocking (and other sorts of filtering) inbound at L7 (which is why they are increasingly being subsumed into the API security/gateway segment, or the SSE segment if you want to merge L3/4 and L7 security capabilities).
> I think either you misunderstand me or I misunderstand you
This entire discussion is about the Internet itself, not companies that connect to it: how does the Internet know which direction to send traffic in? It's managed by a protocol called BGP. Other countries can say your addresses are present in that country, and steal your traffic.
I think mandatory IP source checking would be a nice addition too; many ISPs still don't do it and allow tons of spoofed DoS attack traffic.
Doesn't matter if it's asymmetric or not. You should not be allowed to send egress from prefixes that don't belong to you and that you don't advertise to your upstream. That's what strict RPF is for. Otherwise you are spoofing.
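On most routers, strict uRPF is a one-liner per customer-facing interface. For example, on Cisco IOS (interface name illustrative):

```
interface GigabitEthernet0/1
 ip verify unicast source reachable-via rx
```

Strict mode (`reachable-via rx`) drops packets whose source doesn't match a route pointing back out the receiving interface; where routing is legitimately asymmetric, loose mode (`reachable-via any`) only checks that some route to the source exists.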
I mean, any ISP can check whether a packet leaving their network is actually from a network they control; routing doesn't have anything to do with that?
End ISPs can. If you're an ISP supplying internet service to a business you can and should block source addresses that don't belong to that business, on that business's line.
It doesn't work in the "Internet core". If you're Verizon and you got a packet from Comcast that says it's originally from Cogent, how do you validate that? This router that happens to be checking the packet prefers to send packets to Cogent via Sprint, but that doesn't mean Cogent also prefers to send packets to you via Sprint. Each router can have a different preference, too. (Example scenario only)
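The asymmetry can be sketched concretely: strict uRPF consults this router's own best path back to the source, which may legitimately differ from the path the packet actually took (hypothetical FIB, toy AS names from the scenario above):

```python
# Why strict uRPF breaks in the core: routing is asymmetric, so the
# interface we'd send traffic *toward* a source need not be the one we
# legitimately *receive* that source's traffic on. Hypothetical FIB.
fib = {
    # destination network -> this router's preferred next-hop network
    "Cogent": "Sprint",
}

def strict_urpf_ok(src_net, arrived_via):
    # strict uRPF: accept only if our best path back to the source
    # points at the interface the packet arrived on
    return fib.get(src_net) == arrived_via

# Cogent legitimately hands us this traffic via Comcast, but our best
# path back to Cogent is via Sprint -- strict uRPF drops a valid packet:
print(strict_urpf_ok("Cogent", "Comcast"))  # False (legit packet dropped)
print(strict_urpf_ok("Cogent", "Sprint"))   # True
```

Which is why strict mode is an edge tool, and transit networks at best run loose-mode checks.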
Ah, I was only thinking of hosting providers/consumer ISPs, not transit parties. But then they aren't the source anyway, IMHO. It would already help quite a bit if the parties that can do the source checking actually did.
I think many providers now also limit UDP, which might become an issue as HTTP/3 gets more widely adopted.
Tier 1 providers means the Internet backbone. These are the networks whose core business is interconnecting the whole world. Think Level 3 and Hurricane Electric. It makes zero sense for a Tier 1 provider to deploy reverse route filtering.
(Tier 2 providers are those with lots of interconnections, but not the whole world, just regionally. Tier 3 providers are those who just buy wholesale internet service from another provider and don't have many if any other interconnections.)
No sane provider restricts UDP. You might be thinking of one of the more obscure protocols like SCTP.
I am not pro-Russian, but you've got a point. The amount of hacking attributed, we're told, to North Korea is beyond belief: we're supposed to believe that a country which cannot properly Photoshop a picture (to make people believe they've got military hovercraft!), where there are still people eating roots, and whose best computing feat is to change Red Hat Linux's background wallpaper is...
Full of top-notch hackers infiltrating all the Western world's infra?
> we're supposed to believe that a country which cannot properly Photoshop a picture (to make people believe they've got military hovercraft!), where there are still people eating roots, and whose best computing feat is to change Red Hat Linux's background wallpaper is...
That's a 1990s and early 2000s image of North Korea.
Sanctions are way less biting in NK now that China, Vietnam, and Russia have become much more affluent compared to back then.
It's fairly common for the North Korean government to send their top talent to study CS in China, Russia, and Vietnam, and work as unofficial contractors as well as conduct attacks from abroad.
Vietnam used to be a very common source of attacks for that reason, due to its permissiveness toward Chinese and Korean (North and South) visitors; that's why Vietnamese IP blocks tend to be blocked by most Western WAFs, and North Korean MSPs have been caught operating in Vietnam a lot.
Also, poverty never stopped a country from building a strong MIC. Look at China, India, and Pakistan's domestic MIC capabilities which began being built in the 1970s-80s.
To quote former President Zulfikar Ali Bhutto of Pakistan - "We will eat grass, even go hungry, but we will have our own atom bomb" (early 1970s).
I also wouldn't judge technical prowess based on inability to use Photoshop or other basic tech correctly. Lots of Chinese and Indian government websites are riddled with misconfigured access controls, open ports, and unpatched stacks yet it doesn't mean these countries don't have the ability to innovate military technology.
Low overall skill level in the populace is not the same as isolated competence.
And in software, you really only need isolated competence. We've seen repeated examples in the West where a team of 10-20 highly competent engineers is able to run circles around 10,000 person orgs filled with bureaucrats, managers, and questionable hires.
NK sends a few people to China to train, or even just imports from China, and if they prove themselves capable, says, "We're going to make you part of our elite spy hacking force. We'll pay you $10 million/year." Suddenly the highly competent hacker is living like a king and NK has their spy force. Not so far-fetched.
I agree in part — a lot of the attribution is extremely weak and based entirely on some correlation of previously weakly attributed tactics, techniques, and procedures (TTP in security parlance). Also, I think the effort in describing all these threats in this way is misplaced, but that's an entirely different rant.
The counter is that North Korea is actually capable of pulling off these attacks because of just how bad things are on the internet and how little skill is actually required to pull off a devastating attack. Even in situations where there isn't an active exploit development program in-country, exploits and exploitation frameworks are available on GitHub or for purchase. We have no idea what sort of controls are in place to prevent someone like North Korea from getting access to Pegasus, Core Impact, or Canvas VulnDisco exploits, plus the support and tooling they receive from friendly countries like China.
I can tell you for a fact that NK is not responsible for most of the reported hacks attributed to them, and this is simply a case of 'point at the easy to blame entity' the US plays.
The problem with regulating BGP security is that there will always be some Elbonian sh*thole that won't care what their networks spew out and that consequently, things will only get worse for the people that already play nice, while nothing changes for the hijackers.
The idea is that if Elbonia doesn't improve, America will de-peer from Elbonia and reject indirect routes as well. Elbonia will voluntarily improve so that it can be allowed to connect to America. Basically the same way we enforce things like the US dollar and trade sanctions.
In other words, BGP security is serious business, but that doesn't mean we can't have a little fun with it. After all, a little laughter is the best medicine, even for the internet.
The FCC and by extension the USA should GTFO and let the multi stakeholder models do their work.