Author here. Allow me to extend the post a bit. It turns out that about 2.4% of the IPs that respond to SSDP queries do so from a weird port number! For example:
    IP 192.168.1.75.50950 > 239.255.255.250.1900: UDP, length 95
    IP 192.168.1.71.1026 > 192.168.1.75.50950: UDP, length 249
The first packet is an SSDP M-SEARCH query. The second is a response from my printer. Notice that the source port of the response is not 1900 (though the dst port is okay). I'm not sure what the spec has to say about it, but it's pretty weird. What's worse, these responses won't be matched by a "sport=1900" DDoS mitigation firewall rule.
I'm not sure what the moral here is. But if you ever see some UDP packets going from a weird port to a weird port, maybe it's this SSDP case.
You're half right. In most cases, programs have the OS pick their source port, but that's only on the computer initiating the communication. So in the exchange he gave, 50950 was likely picked by the OS (by selecting a currently unused ephemeral port) and 1900 is the destination port. When the remote computer (his printer) responds, it doesn't pick a new random source port; it just swaps the source/dest from the previous message.
If it didn't keep them the same, the OS wouldn't always be able to tell which connection each packet belongs to, because you can have multiple connections open with the same computer at the same time to the same dest port, and the only differentiation would be the source port.
UDP does not have connections, but the OS does have a concept of UDP connections to a degree, in the form of packet filtering/routing. Point being, if you send a DNS request (for example), the source IP/port and dest IP/port are how the OS will decide which packets to route back to you when the DNS server responds. If the responding DNS server changes the source port, the OS will not route that packet to the original socket, because the source port does not match. You can still make it work, but you would have to be already listening for packets from that port (one way or another), so you would have to know beforehand that they are going to be using a different port.
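To make that concrete, here's a minimal sketch (Python; the server address is a made-up documentation address) of why a connect()ed UDP socket never sees a reply that comes back from a different source port:

    import socket

    # connect() on a UDP socket doesn't negotiate anything - it just pins
    # the remote address/port. The OS will now deliver to this socket only
    # datagrams whose source is exactly 192.0.2.53:53.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect(("192.0.2.53", 53))   # made-up DNS server address
    sock.send(b"query bytes here")     # placeholder payload

    # A reply from 192.0.2.53:53 is returned here. A reply from the same
    # host but with source port 1026 is silently dropped by the OS and
    # never reaches this socket.
    data = sock.recv(512)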
> ... or some request/session/flow id in the application layer protocol. Some UDP protocols use UDP in this way, some don't, UDP itself doesn't care.
You still have to get the packets though, and the OS has no idea about any application layer routing. If you want to get UDP packets from a bunch of different ports, you have to be listening on those ports.
Edit: It's true I was playing a bit loose with the terminology (UDP is connectionless), but the behavior of packet routing and how changing the source port would mess with that is what I was getting at. If you want to be more correct, replace "connection" with "socket" in my original comment.
> UDP does not have connections, but the OS does have a concept of UDP connections to a degree in the form of packet filtering/routing.
The filtering is completely optional to use.
> Point being, if you send a DNS request (for example), the source IP/port and dest IP/port are how the OS will decide which packets to route back to you when the DNS server responds.
That depends on how the requesting resolver has configured the socket.
> If the responding DNS server changes the source port, the OS will not route that packet to the original socket because the source port does not match.
That depends on how the requesting resolver has configured the socket.
> You can still make it work, but you would have to be already listening for packets from that port (one way or another), so you would have to know beforehand they are going to be using a different port.
Yes, obviously you have to know the application protocol you are trying to speak and how it uses UDP before you try to speak it.
> You still have to get the packets though, and the OS has no idea about any application layer routing.
Which is why application layer routing is called application layer routing.
> If you want to get UDP packets from a bunch of different ports, you have to be listening on those ports.
No, you listen on local ports, not on remote ports.
> If you want to be more correct, replace "connection" with "socket" in my original comment.
Well, technically, some minor details would be more correct - but the fundamental assumption that you can only receive datagrams from one remote address/port with a given socket is just completely and utterly wrong, and not just in the sense that it's a theoretical possibility: it's a perfectly normal use case. To take an obvious example, a common configuration for an OpenVPN server is to accept authenticated packets from any remote address and to automatically follow the client's changing address for the sending direction, so when the client changes addresses, the OpenVPN session just keeps going.
As long as you don't connect() a datagram socket in the BSD sockets API, you will receive datagrams from any remote address (and you'll have to specify remote addresses using sendto() when transmitting).
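A minimal sketch of that (Python; the addresses are placeholders):

    import socket

    # An unconnected datagram socket: bind a local port, fix no peer.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5000))

    # sendto() names the destination per datagram...
    sock.sendto(b"request", ("192.0.2.10", 1900))

    # ...and recvfrom() hands back datagrams from ANY remote address/port,
    # together with the sender's address, so matching responses to
    # requests happens in userspace.
    data, (src_ip, src_port) = sock.recvfrom(1500)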
It seems the context of my original comment just went way over your head. Yes, obviously, if the protocol is defined to allow varying the source port, then it will work, because you specifically write your program to handle that. But the person I was responding to was asking in the context of protocols like SSDP, which is not defined that way. And he was asking if you could vary the source port anyway even though the protocol doesn't support it, and I said no and explained why that wouldn't work.
> and I said no and explained why that wouldn't work.
And your explanation is at the very least misleading, bordering on wrong. Pretty much no one (except maybe where the protocol spec were to explicitly require such behaviour) would implement a client that would open a socket per server/per request, but bind them all to the same local address/port.
Either you use one socket for all requests, in which case you don't connect(), so you receive all the responses, and thus would also receive datagrams from addresses/ports that you didn't send to, and instead you would do the matching of responses to requests in userspace, even if potentially based on the sender's address/port.
Or you use one socket per server/per request and let the OS assign you a free port per socket, in which case the local address/port is perfectly sufficient for the OS to route received datagrams to sockets. In the latter case, it's common to simplify your code by letting the OS handle the filtering of source addresses, but that's all it really is, filtering--actual routing based on remote addresses by the OS is not what normally happens and not why varying the response source port would not work with many protocols.
When the OS picks a random port, it picks from a pool of ephemeral ports, which can vary from OS to OS. My assumption is that the printer responding with a different source port breaks the communication over NAT. Is it a possibility that this is intended?
Well I mean, you're correct that it would break the communication over NAT, but it breaks the communication over the local network as well. You might still receive that packet at your NIC, but unless you know beforehand that it will send from that source port, and tell the OS (by setting up a socket etc. to collect that packet), the packet will never be routed to your application. So I don't really feel like this could be intended, because it just makes stuff not work.
More casualties from BCP 38 failures. This article mentions it but then dilutes the importance of it by suggesting SSDP is a problem. If IP spoofing did not work on the Internet, none of these UDP reflection attacks would work.
A scheme to strong-arm the adoption of BCP 38 is key to stopping these attacks from growing. IoT has shown us that expecting device updates to disable these UDP protocols is a lost battle.
"A scheme to strong arm the adoption of BCP 38 is key to stopping these attacks from growing. IoT has shown us that expecting device updates to disable these UDP protocols is a lost battle."
Easily done: "Follow some standards and RFCs or get put on a global blacklist of companies to not do business with."
You don't have to have a router. The apartment building where I live has fiber, with twisted pair to each apartment. DHCP leases from the apartment give you an external IP. It's possible to hook up a switch and get DHCP leases for multiple devices. I assume there's an upper limit; I've only tried it with two devices.
Now, let's say I hook up a printer to a switch in that configuration. Is it smart enough to not respond to UPnP coming from globally routable addresses?
> Now, let's say I hook up a printer to a switch in that configuration. Is it smart enough to not respond to UPnP coming from globally routable addresses?
WTF? Wouldn't it be utterly idiotic to not respond to UPnP requests from globally routable addresses? Why should it be impossible to print from some machine, just because it has a globally routable address?
"Why should it be impossible to print from some machine, just because it has a globally routable address?"
Because it's an Internet.
Sure, it's uncommon behavior and not what most people want, but let's not completely give up on the notion of being a peer on the network.
The printer serves up (printing) just like a web server serves up web pages. You should be able to run a web server and participate as a peer, globally.
Consider a printer (or even an external disk) connected via USB to the router. A lot of home routers support sharing said printer over the network, and some of those routers probably answer UPnP requests on the WAN port as well.
The security standard we use for everything in our network is: if it would be insecure if hooked directly up to the Internet, it is broken.
Firewalls encourage poor security by creating a false sense of security and leading to developer and system administrator complacency. IMHO it would be better to get rid of them and let the insecure junk burn. To prevent DDoS exploitation, the best approach would be to have grey hats take the latest exploits and mass-brick exploitable devices.
We'd learn our lesson and then we'd have secure devices.
NAT and pray still leaves you screwed under IPv4 these days - attackers know how to bypass NAT-without-filtering.
(I don't think firewalls are a good solution in general, but I would agree that they might be the least-bad way to handle crappy embedded/IOT-type devices).
The reflector attack in question cannot bypass a NAT setup in any meaningful way. Yes, there are tricks that make some protocols NAT-inspectable. It's not perfect. But as a default behavior it's proven surprisingly strong. Typical IPv6 deployments are significantly less secure, sadly.
On the other hand the main reason we have UPnP at all is to deal with the need to work around NAT - maybe these vulnerable devices simply wouldn't be running a UPnP stack at all under IPv6.
The same way people run p2p clients like BitTorrent from home, or the nerdier people run home servers with SSH, RDP or HTTP(S) exposed: they just use the port forwarding features available on every single consumer router. Sometimes software / hardware will automatically assign a port forward via uPnP[0] (this is what many P2P clients will do), sometimes it's done manually[1][2][3][4]. The only difference here is that it's datagram traffic, but routers have adaptive stateful firewalls (else services like FTP would fail despite being TCP), so they can handle the connectionless nature of UDP just fine.
"The vulnerable IPs are seem to be mostly unprotected home routers."
I interpreted this as routers that themselves have uPnP implementations, probably intended to advertise themselves to clients on the local network, but that listen on all network interfaces by mistake.
It will be years and years until those vulnerable miniupnpd versions are updated. Most are in embedded devices which will never see another update.
I'm glad to see miniupnp is still in active development: https://github.com/miniupnp/miniupnp but I can't work out if it's set to be vulnerable by default.
If a device is listening to UPnP on the WAN interface, the fault is not on UPnP but on whoever configured it to be open on the WAN. IMO, all of these zeroconf protocols should be limited to responding back only to the local segment and not allowed to traverse gateways.
One of the documented use cases for UPnP is IGD which expressly allows UPnP devices to configure fire wall rules and to set up NAT to map ports to the outside world. So a UPnP device that wishes to expose itself to the outside world is able to do so and this is by design, not by accident. Whether you agree with that or not is another matter.
Agreed - but can there ever be any legitimate use case for a home router to speak IGD over its WAN interface? IGD is typically meant to allow your Xbox on your LAN to set up forwarding rules.
None. Exposing miniupnpd on a public interface is always a misconfiguration. I'm disappointed that the application even allows it -- it has to know which one is public to set up IP forwarding rules, so it has no excuse.
> Internet service providers should never allow IP spoofing to be performed on their network. IP spoofing is the true root cause of the issue. See the infamous BCP38.
I don't see how it is at all reasonable to shift blame from a protocol that assumes the world can be trusted to the unattainable goal of "every single network in the entire world should only generate trusted data: then the problem would be solved".
> Internet providers should internally collect netflow protocol samples. The netflow is needed to identify the true source of the attack. With netflow it's trivial to answer questions like: "Which of my customers sent 6.4Mpps of traffic to port 1900?". Due to privacy concerns we recommend collecting netflow samples with largest possible sampling value: 1 in 64k packets. This will be sufficient to track DDoS attacks while preserving decent privacy of single customer connections.
OMFG. Do you want deanonymization attacks? Because this is how you get deanonymization attacks :/. The right form of solution here is not to encourage ISPs to log even more of our traffic (a practice I wish were illegal), but to try to kill off UPNP through every form of leverage possible (even if it breaks things).
I'd say this is "so disappointing", but I guess I shouldn't expect much from the company that tried its damnedest to argue that nothing of importance was leaked from Cloudbleed even when you could still recover Grindr requests complete with IP addresses that they had managed to leak well after they tried to claim that data had been scrubbed :/.
- We will always have DDoS-vulnerable UDP protocols. In the past we had DNS. Then we had NTP. Now we have SSDP. The next one is going to be some gaming protocol. We should fix them as we go, but a more comprehensive solution is to actually fight the spoofing.
- Even without using amplification, with IP spoofing it's possible to launch a direct attack, which will be untraceable. We regularly see 150Mpps+ packet floods going _direct_ from the attackers to our servers. The ISPs are clueless. There is no way for anyone to trace the true source of the attack (without netflow, that is).
This brings us to the second point - netflow. You say the ISPs are incompetent, they do not have netflow, and this is _good_. No, it's not good. The ISPs can track you / deanonymize you anyway, but when I ask them: "hey guys, I see this 150Mpps flood from your network, can you do something about it?" they say: "no, we can't identify the source because the IPs are spoofed". Yes, I hereby recommend that each of the ISPs should take care of their network and be able to answer historical questions about DDoS. That means the netflow collection point will have statistical metadata about customer connections (1 in 64k connections will have saved data - source port/ip, dest port/ip, length, packets, bytes). This might be used to attack your privacy - but the ISP can do much worse things anyway. Doing netflow right will allow us to finally trace the IP spoofing.
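To be concrete, the sampling I have in mind looks roughly like this (a sketch, not any particular vendor's implementation; the pkt fields are placeholders):

    import random

    SAMPLE_RATE = 64 * 1024   # keep metadata for ~1 in 64k packets

    flow_log = []

    def maybe_record(pkt):
        # The vast majority of packets are never even looked at.
        if random.randrange(SAMPLE_RATE) != 0:
            return
        # For a sampled packet, store flow metadata only - no payload.
        flow_log.append((pkt.src_ip, pkt.src_port,
                         pkt.dst_ip, pkt.dst_port,
                         pkt.length))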
I really think that DDoS is a threat to the internet as we know it. Think about the centralization it causes: can your server sustain a trivial 100Gbps SSDP attack? I really think that doing netflow right will allow us to keep the decentralized internet.
> We should fix them as we go, but a more comprehensive solution is to actually fight the spoofing.
The problem is that the Internet as designed simply supports this, and you can't fix it unless you fix the entire Internet at once; this problem is harder and less realistic to fix than any other place to poke at the problem, specifically due to the entire nature of the attack: it is an amplification attack... so I only need to find--somewhere, anywhere--a smattering of Internet that still supports spoofing, and use that to launch my attack.
> The ISP's can track you / deanonymize anyway...
They can, yes; the question is how much they do and if they should: I believe that it should be illegal for them to do this, and in a more perfect world on a more perfect network I believe it should be impossible for them to do this. The idea that you seriously think that not only is it OK that they do this but that they actively should do more of it, in all honesty, sickens me: we should be striving for a world where the list of reasons an ISP "should" track you--the list of reasons people feel they have to--is empty.
> That means the netflow collection point will have statistical metadata about customer connections (1 in 64k connections will have saved data - source port/ip, dest port/ip, length, packets, bytes).
No: that's not what this article says, and that's not how NetFlow works. You are proposing logging 1 out of every 64k packets, not 1 out of every 64k connections. Connections are made up of multiple packets - at least 4 - and are sometimes made up of many, many packets (the average I read was ~100 packets per connection, though I'm sure that falls into some inverse power law). So you are logging way more connections than 1 out of every 64k, and there are known attacks even on networks like Tor that are based on having this NetFlow data to correlate connections.
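Back of the envelope: assuming ~100 packets per connection, the chance that a given connection has at least one sampled packet is 1 - (1 - 1/65536)^100 ≈ 100/65536, i.e. roughly 1 in 655. So this logs on the order of a hundred times more connections than "1 in 64k" suggests.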
> Think about centralization that it causes: can your server sustain trivial 100Gbps SSDP attack?
The only force of centralization I'm seeing in either my experiences or your presentation is the marketing that comes out of CloudFlare and the, as far as many of us can tell, bending of the truth as to what even constitutes an attack that is used to up-sell existing CloudFlare customers. It is one of the reasons why the only websites you ever really notice being attacked are ones behind CloudFlare, because CloudFlare really really wants random other people to notice that they are "helping".
FWIW, I have absolutely been the target of SSDP attacks, and have been concerned about this protocol for a long, long time, and it isn't obvious to me how "let's centralize more" is the real answer to the problem: if you really want to protect yourself from a DDoS attack, the obvious solution is to decentralize, not centralize... the more centralized you are, the more you have a target that can actually be taken down. As a visceral demonstration of this, I dare you to use an SSDP attack to take down Bitcoin: the only reason that this attack is a conceptual problem in the first place is that people like to centralize things.
Right, sounds like our fundamental assumptions differ.
For the great majority of internet users, buying more capacity around the globe to sustain a 100Gbps SSDP attack is not an option. If you run a mildly controversial website, you don't expect to pay much for idle bandwidth. You can go for hosted solutions, but then you will be charged for attack traffic. What I'm proposing is a solution to this problem - how can we make the internet safer for the most common use case. I propose: netflow (to identify the spoofing boxes), flowspec (as a stop-gap measure), and BCP38 (the fundamental issue) will get us a long way.
If we were to design HTTP from scratch we could discuss how to make it truly decentralized. This sounds like an academic discussion though.
Your second argument is that DDoS is not a real problem. I don't know how to assess it. Dyn was down. Krebs went down. These are facts. I'm definitely not the guy that shouts "we are all doomed! buy product A or you will go down". All I say is - this is what I see, this is what happened, here are the numbers. Read the data and assess it yourself I guess!
> Even without using amplification, with IP spoofing it's possible to launch a direct attack, which will be untraceable.
A long time ago, there was a proposal (itrace, its latest draft was https://tools.ietf.org/html/draft-ietf-itrace-04; see also http://ccr.sigcomm.org/archive/2001/jul01/ccr-200107-paxson....) to make these attacks easier to trace, by having routers probabilistically emit ICMP packets towards the supposed target or source of a packet. From what I recall, as DDoS attacks moved from IP spoofing to zombies using their real IP address, the working group sort of lost its purpose and died.
Question for any network admins here: I enabled IPFIX on our Juniper MX series routers for that exact purpose, but it contains Layer 3 info only (no MAC addresses!).
What am I missing? For now, I'm getting the info I need from sFlow but I want to get rid of that ASAP.
You might be happy to hear that the ISP I work for can definitely identify where that 150Mpps flood came from :) We're even doing some automated outbound mitigation in order to be good net citizens. CloudFlare's blog articles definitely helped us improve our network-level DDoS mitigation, by the way! Thanks for that.
IP spoofing is the problem. Even if you completely ban UDP protocols, it still allows for anonymous unamplified attacks.
Attacking every single protocol that dares to respond to a query is a pretty stupid approach IMO. Look how well it's worked so far.
Additionally, unless we switch DNS to TCP only, root and authoritative name servers are always going to provide an amplification factor, and there are still more than enough of them for devastating attacks.
Agree with you about monitoring, but that wouldn't be necessary if we got serious about enforcing and blacklisting ISPs that drop the ball on BCP 38.
1) Hardware. ALL routers' performance degrades. Sometimes to the point of unsuitability.
2) Software. No commonly agreed way to maintain route and route6 objects. No federation for them.
3) Administrative. Lack of network hygiene.
Keyword: BCP38
Relevant document https://tools.ietf.org/pdf/bcp38.pdf
It's not a few. It's a very common mistake. Say I have 20 10G ports on a line card. A mid-size ISP has from hundreds to thousands of routes. Thus we end up with thousands of ACL entries per line card. That affects performance, dramatically.
Say, on an older yet still supported Cisco 7600 you can apply about 10k entries per port, ~100k total per card. That's the limit; after that, performance degrades.
Another question is what to do on peer-peer links, like say NTT<->CenturyLink? Both of them have interconnects across the globe and send each other hundreds of thousands of prefixes on 100G ports. That's the challenge as well. Hardware is not there; it's coming slowly, but it's not there yet. Most of the big players DO one or another form of ingress filtering on some ports. A valid point is that smaller players are arrogant on the issue and dump all the garbage to the upstream.
This is why handwaving and shouting "BCP38" is not sufficient. I could live with the half-baked BCP38 deployment as it is now, if only I had other tools to trace malicious actors - netflow.
In the case of huge DoS attacks they do something similar. But with DDoS attacks the spoofed packets are coming from so many different locations that it is hard to identify them, and often it is just stuff like compromised toasters.
I am not this kind of network engineer, HOWEVER: both IPv4 and IPv6 are versioned by the first 4 bits of the packet. Depending on that value, the address size and location are fixed.
I would not consider the comparison of the source address of packets crossing an ingress link to be 'deep'. I consider that check to be very shallow. It needn't even be every packet from a set; merely picking a random (actually random) packet and testing for conformity is a good quality control measure that SHOULD be taken.
What would the comparison be against? Routers are supposed to know which networks are on the other side of all downstream links so that they can route effectively.
Why would your ISP allow you to send packets with a source address it hasn't allocated to YOU? That kind of check/enforcement is pretty cheap and simple.
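For illustration, here's roughly all the check amounts to (a Python sketch; the allocated prefix is a made-up example, and real routers do this in hardware ACLs, not userspace code):

    import ipaddress

    # The prefix the ISP allocated to this customer port (example value).
    ALLOCATED = ipaddress.ip_network("198.51.100.0/24")

    def ingress_ok(packet: bytes) -> bool:
        version = packet[0] >> 4                   # first 4 bits: IP version
        if version != 4:
            return False                           # a v6 port would check a v6 prefix
        src = ipaddress.ip_address(packet[12:16])  # IPv4 source address field
        return src in ALLOCATED                    # BCP38: drop spoofed sources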
Shameless plug: When I read about SSDP a little while ago I was curious to see if I'd encounter it on many networks. As I was also trying to learn Swift/Apple development, I've written two (non-free) little apps for macOS/iOS to monitor SSDP messages:
Ever since creating them and just checking on some networks, I'm surprised at how many devices are actually using it. I probably saw this in Wireshark before as well, but probably overlooked it because you're never really looking for it. I wonder if many other such protocols are often used but easily missed...
More on the SSDP servers
Since we probed the vulnerable SSDP servers, here are the most common Server header values we received:
    104833 Linux/2.4.22-1.2115.nptl UPnP/1.0 miniupnpd/1.0
     77329 System/1.0 UPnP/1.0 IGD/1.0
     66639 TBS/R2 UPnP/1.0 MiniUPnPd/1.2
     12863 Ubuntu/7.10 UPnP/1.0 miniupnpd/1.0
     11544 ASUSTeK UPnP/1.0 MiniUPnPd/1.4
What on earth is internet-facing and running 2.4 Linux kernels?
2.4? That takes me back to my misspent college years. When I should have been out in bars partying, I was at home babysitting my 2.4 gentoo builds...
DD-WRT installed by wannabe linux nerds and forgotten about. "Yeah, I installed DD-WRT 6 years ago because it's better." Never mind the vulnerabilities that have been patched over the years.
Every single thread of this nature has a similar comment, and I really want to know (ie, I want to hear this fully fleshed out because I think your concerns are valid and worth exploring): is this demonstrative of a new (or in some way more valid) notion of the word "hacker" in "hacker news?"
My sense of that word, and of the culture underlying it, is that a critical part of its critique is that obscurity, specifically in its implications for security (and thus, perhaps, civility and peace and justice), is subject to deprecation in the information age, precisely in favor of styles of disclosure like this: where the pudding for the tasting is provided as the proof.
Are there good reasons to believe that obscurity (ie, keeping secret the means and methods of attack) is likely to be a viable defense in favor of civility and justice in the age to come?
I find it fascinating that the packets per second chart resembles an RC circuit's step response. I wonder if there is a good electrical circuit analogy for packets, packet size, and bandwidth.
This is so trivial that it really doesn't matter - it's just sending a completely normal SSDP request, code for which you could find in any implementation of the protocol.
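For reference, the whole "tool" is on the order of this (a Python sketch of the standard discovery query; values like the MX timeout are arbitrary):

    import socket

    # A completely ordinary SSDP M-SEARCH, as any UPnP client would send it.
    MSEARCH = ("M-SEARCH * HTTP/1.1\r\n"
               "HOST: 239.255.255.250:1900\r\n"
               'MAN: "ssdp:discover"\r\n'
               "MX: 1\r\n"
               "ST: ssdp:all\r\n"
               "\r\n").encode()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2)
    sock.sendto(MSEARCH, ("239.255.255.250", 1900))

    # Print the first response line from every device that answers.
    try:
        while True:
            data, addr = sock.recvfrom(2048)
            print(addr, data.decode(errors="replace").splitlines()[0])
    except socket.timeout:
        pass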