It'd be neat to include options to simulate other pathological conditions often encountered in the field like:
- Multiple layers of NAT - overlapping un-synchronized timeouts are one thing ... but the REAL fun comes in when intermediate layers have the same IP ranges as IPs you are trying to reach on the "outside" of the NAT sandwich. All kinds of "interesting" things can happen, like the "software laser": http://catb.org/jargon/html/S/software-laser.html
- Stateful NATs/firewalls with bizarrely short connection / UDP association timeouts or that randomly forget connection state
- Shitty NATs/routers that go into bizarre failure modes when there are too many open connections, like forgetting early ones because they remember connections via a small ring buffer
- People who block all ICMP "because security."
- NAT + short DHCP leases = musical external IP addresses
- Small MTUs (<1500) like those imposed by PPPoE and other nasty encapsulation protocols, sometimes without ICMP errors for big packets because some jackhole blocked ICMP "because security."
All these things are common in the field. WHYWHYWHYWHYWHY
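Some of these failure modes are easy to prototype in-process before reaching for real network tooling. A toy sketch of the ring-buffer failure mode above (nothing here is from any real router firmware; the class and port numbers are made up): a NAT table that only remembers its last few mappings, so old flows silently fall off the end.

```python
from collections import OrderedDict

class RingBufferNAT:
    """Toy NAT table that tracks at most `capacity` mappings.

    When a new flow arrives and the table is full, the oldest mapping
    is silently forgotten -- mimicking cheap routers that keep
    connection state in a small ring buffer.
    """

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.table = OrderedDict()  # (src_ip, src_port) -> external port
        self.next_port = 40000

    def translate(self, src):
        if src not in self.table:
            if len(self.table) >= self.capacity:
                self.table.popitem(last=False)  # forget the oldest flow
            self.table[src] = self.next_port
            self.next_port += 1
        return self.table[src]

    def knows(self, src):
        return src in self.table


nat = RingBufferNAT(capacity=4)
flows = [("10.0.0.%d" % i, 5000) for i in range(5)]
for f in flows:
    nat.translate(f)

# The first flow has been evicted: return traffic for it is now dropped.
print(nat.knows(flows[0]))  # False
print(nat.knows(flows[4]))  # True
```

From the first flow's point of view, the connection just goes dead with no error, which is exactly the "randomly forget connection state" behavior worth simulating.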
I also forgot TCP-over-TCP tunnels, which cause strange timing and flow-control behavior: the inner connection never actually sees packets dropped on congestion, and every segment effectively gets ACKed twice, once per layer.
What do you mean randomly block specific ports because "security"? All ports should be denied except those with a justified business reason. Got a web app? The only thing open should be 80/443. There's no reason for SMTP to be open on the web server. Anything doing mail should be on its own MTA server. Least functionality per server. That's not even security. That's just good system administration.
At one of the enterprises that I've had the pleasure to work at, the network guys would randomly come up with some "concerns" about your firewall requests, and would just not include certain parts of your request.
So you might request ports 4000-4100, and find that 4007 is blocked, "because security".
I'm pretty sure the reality was that the firewall rules were a big hairball, and my request was stepping on some other rule put in place a long time ago.
Skype has abandoned P2P, though I'm not sure why. One possibility is that MS now has middle boxes deployed at so many interchange and peering points that there's no benefit in maintaining the added complexity. Another is that it was to comply with surveillance/tapping requirements.
IIRC they said that it's because of the shift to mobile. You can't really do P2P when the majority of your clients sit on phones.
EDIT: [I can't reply to the comment below, so I'll add here]
It's most likely not a technological problem, but rather that data usage is limited on mobile, and you don't want your users to pay for traffic they didn't use.
I keep hearing a categorical "you can't do P2P on mobile." I intend to fling myself at this problem like a drunk seagull soon, so I'll blog on it. I think it's possible, but with a number of special considerations around wakeup-quantization and general power management, some protocol augmentation, and a high tolerance for nodes appearing and disappearing.
The problem isn't power (well it is; but that can be overcome), it's spectrum and pricing.
Pricing: Most people in the US are charged for the data they use on their mobile devices, and thus would not want P2P used on their phone because it costs them money.
Spectrum: P2P is not a very efficient distribution model in a world where most clients are on asymmetric connections. Asymmetric connections exist because transmission spectrum is limited, so to maximize spectrum usage, telcos allocate more spectrum for downstream transmissions than upstream. But if everyone's phone is chattering all the time with P2P traffic, you're going to saturate the spectrum and reduce overall data speeds. This is why mobile is still charged on a usage basis: it discourages overly chatty applications.
P2P doesn't necessarily mean cooperative relaying or swarm distribution like BitTorrent. I agree that those applications are mobile-unfriendly with current batteries and cell networks. It just means you are talking directly to your peers instead of back-hauling to the cloud. You can have P2P where the only traffic you handle is your own. In that scenario total aggregate bandwidth shouldn't be that different from a cloud-backhauled app -- the only difference you'd see is in how many endpoints you're talking to. So instead of seeing 25 MB transferred to/from one IP, you'd see it sprayed across a few dozen IPs.
The big hurdles I see are (in no particular order):
- Connection maintenance and keepalives. Keepalives are expensive on a mobile radio, so you want to be more aggressive about shutting down unnecessary P2P links on mobile than you need to be on desktop/server. Keepalive requirements generally suck anyway, and are one way NAT murders kittens.
- Restrictions around background tasks on mobile OSes (iOS is particularly onerous).
- Squelching inbound traffic from badly behaved or broken peers to avoid inbound flooding.
- Battery life and related concerns.
I understand it this way: cellular networks currently have their own "netiquette." It includes things like don't be too chatty, try to coalesce instead of spewing packets at excessively random times, etc. These things are less important on wired networks since they don't have the same resource constraints or bandwidth issues.
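The "coalesce instead of spewing packets" idea can be made concrete: buffer small sends and flush them in bursts, so the radio powers up once per batch instead of once per message. A toy sketch with made-up thresholds (not from any carrier guideline):

```python
import time

class CoalescingSender:
    """Buffers small messages and flushes them in one burst.

    Each radio wakeup has a fixed energy cost, so sending N messages in
    one batch is much cheaper than N separate sends. The thresholds here
    are illustrative, not from any spec.
    """

    def __init__(self, transport, max_batch=10, max_delay=5.0):
        self.transport = transport      # callable taking a list of messages
        self.max_batch = max_batch
        self.max_delay = max_delay      # seconds the oldest message may wait
        self.buffer = []
        self.oldest = None

    def send(self, msg, now=None):
        now = time.monotonic() if now is None else now
        if not self.buffer:
            self.oldest = now
        self.buffer.append(msg)
        # Flush when the batch is full or the oldest message is too stale.
        if len(self.buffer) >= self.max_batch or now - self.oldest >= self.max_delay:
            self.flush()

    def flush(self):
        if self.buffer:
            self.transport(list(self.buffer))
            self.buffer.clear()


batches = []
sender = CoalescingSender(batches.append, max_batch=3, max_delay=5.0)
for i in range(7):
    sender.send(i, now=float(i) * 0.1)
sender.flush()
print(batches)  # [[0, 1, 2], [3, 4, 5], [6]]
```

Instead of seven radio wakeups you get three; the `max_delay` bound keeps latency from growing without limit.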
I guess a related question is why you would do P2P and not backhaul to the cloud? I can think of many:
- Reduced latency for things like AR and VR where latency matters a lot.
- Reduced bandwidth cost due to lack of back-haul; sending a pic between two people in the same city 2000 miles out to a cloud server and back is just offensively stupid anyway.
- Privacy and security.
- With P2P you could have a more open app model where apps aren't wedded to proprietary cloud infrastructure. They still work even without someone's cloud, etc.
If you're talking about P2P over cellular, I think it's just in general a bad idea. The way cellular networks are operated, you can't do device-to-device connections - they have to go through the tower for a number of (very good) reasons. That's where the latency comes in, and the wireless transmission latency can be an order of magnitude higher than the latency on a cross-country connection. Furthermore, the resource that is constrained is the amount of available spectrum - so we need to optimize for that. Bandwidth is effectively infinite from the tower to the cloud, but bandwidth from the tower to the device is the constraint.
P2P over non-cellular (i.e. Wi-Fi or Bluetooth), where you can actually make a direct device-to-device connection, seems like it may have some use cases (messaging apps, etc.). But they're edge cases rather than the common case, because direct connections just aren't reliable enough.
I forgot about DNS munging shenanigans. Not only do some people selectively block DNS, but there are also cases where middle boxes and proxies pretend to be remote DNS servers but really aren't... or rewrite DNS traffic, insert fields, etc.
Are there any network administrators who don't consider NAT traversal to be a security breach? Making it difficult would seem to be a feature, not a bug, in most enterprises.
NAT traversal is a security breach that became a de-facto standard and is part of the SIP, STUN, and IPSec (extended) standards among other RFCs. It exists because NAT itself is an abomination.
Regardless of how good the tool is - that was an excellent demonstration clip! Within ~30 seconds I knew exactly what the tool did, why it was cool, and what I might want to use it for.
Startup landing pages could learn a lot from that single gif.
People talk about "fuck you money," meaning the amount of money you need to make before you can never work again. Personally, my "fuck you money" isn't money per se, but the fall of Comcast. When I build enough value in the world to displace Comcast, I will retire.
We need to see more work in the areas of mesh networking, layer-3 routing, and consumer networking in general. This tool is a good step. Personally I'm hacking on some OpenWRT routers right now -- I recommend everyone try it. The documentation is dense, but there's a friendly community writing it.
Just yesterday I noticed my bill had gone up to $147. I called and did the yearly "I'll cancel if you don't give me a discount" nonsense. An hour out of my day.
Btw, they had raised the price of my 5 static IPs to $24.95 (from $9.95). I have to lease their modem ($12.95) because of the static IPs. I did research and found out that my registrar (namecheap.com) has free dynamic dns, so I'm going to switch to that and give up my static IPs.
So, after the $40 discount I got yesterday, plus the extra $38 for the modem/IPs, I should be down to a reasonable rate again.
It is true, what I said. I have Business Internet and they will not allow activation of a non-Comcast-owned modem when you have static IPs. I know, I tried to do it twice.
Ah yeah, business side I have no experience with. Just wanted to make sure anyone buying on the consumer side knows this, as they tried to pull the same crap with me when it was not true.
The problem is that fair use and satire are defenses that must be argued. They don't automatically stop you from being sued. It can result in a lot of upfront costs and hoops to jump through before you are actually granted your fair-use exception. You have to be pretty committed to your joke to actually fight something like this to the end.
Although if I were Comcast, I would ignore this entirely. I wouldn't want to risk the Streisand Effect bringing attention to this type of thing.
You can't ignore a trademark violation and keep your trademark. Trademarks can become "generic" (think Kleenex or Band-Aid) if not enforced, which makes them unenforceable. So no, they won't ignore it.
You just reminded me that I have been dragging my feet upgrading my two OpenWRT routers for the last few weeks. I guess I know what I am spending my lunch break on!
You really shouldn't do something like this without at the very least mentioning Kyle Kingsbury and the Jepsen test suite he wrote to validate whether distributed datastores actually live up to their CAP claims.
For any devops / sysadmin / systems engineer: I highly suggest reading this, if only to understand failure conditions better.
The blog series is called "Call Me Maybe" after Carly Rae Jepsen's stupid pop song, which is also what he named his test suite after. Kyle's posts are absolutely hilarious to read. Seriously, read them.
I had that thought as well. From poking around various IP law sites, DMCA is purely for copyright infringement, and can possibly open you up to more liability if you try to use it for trademark claims (what this would be), see http://www.lexology.com/library/detail.aspx?g=13f9814f-b56e-... for an example.
If Comcast wanted to take this down, it would be through a trademark infringement claim.
I was genuinely confused about whether this was something from the real Comcast, which makes it seem like a clear case of trademark infringement (but it's not my call, and I may be wrong).
If GitHub agrees, then it would be a violation of their Terms of Service [1], section A.8:
"You may not use the Service for any illegal or unauthorized purpose. You must not, in the use of the Service, violate any laws in your jurisdiction (including but not limited to copyright or trademark laws)."
Technically, no. However, it wouldn't be the first (or last) time that a DMCA takedown was issued for the wrong reasons. I can't remember specifics, but I seem to recall DMCAs issued for name conflicts before.
Yeah, I used to do stuff like this with tc directly back in the day, but I could never remember the options. This wrapper seems to make it a bit more obvious what goes where.
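For anyone else who can never remember what goes where, the shape of the underlying invocation is simple enough to capture in a few lines. A hedged sketch, written from memory of `tc-netem`; check `man tc-netem` on your system before trusting the exact syntax:

```python
def netem_command(iface, delay_ms=None, loss_pct=None, rate_kbit=None):
    """Compose a `tc qdisc` command line for the netem qdisc.

    Sketch from memory of the common netem options (delay, loss, rate);
    consult `man tc-netem` for your tc version before relying on this.
    """
    parts = ["tc", "qdisc", "add", "dev", iface, "root", "netem"]
    if delay_ms is not None:
        parts += ["delay", "%dms" % delay_ms]
    if loss_pct is not None:
        parts += ["loss", "%g%%" % loss_pct]
    if rate_kbit is not None:
        parts += ["rate", "%dkbit" % rate_kbit]
    return " ".join(parts)


print(netem_command("eth0", delay_ms=250, loss_pct=1.5))
# tc qdisc add dev eth0 root netem delay 250ms loss 1.5%
```

Running the resulting command requires root, and you'd pair it with a matching `tc qdisc del dev eth0 root` to clean up afterward.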
I'm curious if anyone knows of tools like this that are more deterministic?
I'm in the process of writing a library at work that basically implements the CoAP protocol from the ground up, and having something I can script to force the same packets to be lost or delayed would be very useful for testing. I've been using Apple's Network Link Conditioner, but when I do discover a problem, it can take a long time for the same scenario to happen again, which makes testing the fix quite difficult.
I'm currently trying to think through how I'd write this to make the filters easy to setup [0], but if anyone knows of something that already exists, please let me know.
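One way to get determinism is to move the loss model into a scripted local proxy: decide each packet's fate from its sequence index (or a seeded PRNG), so the same scenario replays identically every run. A hedged sketch of just the decision logic, decoupled from actual sockets (the class name and parameters are made up for illustration):

```python
import random

class ScriptedLossFilter:
    """Deterministically decides whether to drop the Nth packet.

    Drive it from an explicit drop list (to replay an exact scenario)
    or from a seeded PRNG (random-looking but fully reproducible).
    """

    def __init__(self, drop_indices=None, loss_rate=0.0, seed=0):
        self.drop_indices = set(drop_indices or [])
        self.loss_rate = loss_rate
        self.rng = random.Random(seed)   # seeded: same decisions every run
        self.index = 0

    def should_drop(self):
        i = self.index
        self.index += 1
        if i in self.drop_indices:
            return True
        return self.rng.random() < self.loss_rate


# Replay an exact scenario: drop packets 2 and 5, deliver the rest.
f = ScriptedLossFilter(drop_indices=[2, 5])
delivered = [i for i in range(8) if not f.should_drop()]
print(delivered)  # [0, 1, 3, 4, 6, 7]
```

Wrap this around a small UDP forwarder on localhost and you can reproduce the exact loss pattern that triggered a bug, every time.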
Google Chrome's device mode has a subset of this functionality where you can make Chrome throttle the connection simulating different conditions (GPRS, DSL, offline, etc.).
Access device mode by clicking the smartphone icon on the top left of the Developer Tools (F12) right next to the Elements tab.
It's very useful. I discovered this the other day when testing some client side scripts. I was able to simulate a 3G mobile connection and determine which parts were causing the intolerable load times.
Did this with dummynet on FreeBSD around 1999. Made it a bridge though, so everyone on the test network could "enjoy" 28.8kbaud dialup experience! Great fun.
Called it "the molasses network", which is what they should change the name to before Comcast smokes them.