
That's what many of the tests at M-Lab are trying to do (I encourage people to take a look: http://www.measurementlab.net/), but packets don't traverse networks in a straightforward path. I think focusing on that data and trying to figure out ways to prove traffic manipulation, and to pinpoint the consistent places, times, and applications where traffic slows, would be valuable. It's also worth understanding how the FCC currently measures speed and treats speed tests; it's not at all straightforward: http://www.fcc.gov/measuring-broadband-america/2013/develope...
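For anyone who wants to start gathering their own evidence, here's a minimal sketch of the kind of logging that makes this analysis possible. It samples download throughput on a schedule and records each sample with a timestamp; TEST_URL is a hypothetical placeholder for whatever endpoint you actually care about (an M-Lab server, a video CDN, etc.), and the interval is just an assumption.

```python
# Minimal sketch: sample download throughput on a schedule and log it with a
# timestamp, so you can later look for consistent time-of-day slowdowns.
# TEST_URL is a placeholder; in practice you'd test against the services you
# actually care about (video CDNs, M-Lab servers, etc.).
import csv
import time
import urllib.request
from datetime import datetime

TEST_URL = "https://example.com/10MB.bin"  # hypothetical test object
INTERVAL_SECONDS = 15 * 60                 # one sample every 15 minutes

def measure_mbps(url: str) -> float:
    """Download the test object once and return observed throughput in Mbps."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=60) as resp:
        nbytes = len(resp.read())
    elapsed = time.monotonic() - start
    return (nbytes * 8) / (elapsed * 1_000_000)

if __name__ == "__main__":
    with open("throughput_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            try:
                mbps = measure_mbps(TEST_URL)
                writer.writerow([datetime.now().isoformat(), round(mbps, 2)])
            except OSError as exc:
                writer.writerow([datetime.now().isoformat(), "error", exc])
            f.flush()
            time.sleep(INTERVAL_SECONDS)
```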

We should be having a technically grounded discussion about tests and the right way to look for problems, one we aren't having today because the subject is so wonky. We totally need data, but we also need to figure out what data will be most useful for proving bad behavior. For example, if my speed drops to 1 Mbps for 30 minutes during prime time but sits at 55 Mbps the rest of the day, that blip gets heavily discounted in the averages, but it shouldn't be, given that I'm probably trying to stream video during exactly that window.
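To make that concrete, here is a rough sketch (assuming a simple log of timestamped Mbps samples, like the one produced above) of why an all-day average buries a prime-time dip while hour-of-day buckets and a low percentile surface it.

```python
# Minimal sketch of why averages hide prime-time throttling: bucket the
# throughput log by hour of day and look at a low percentile per bucket,
# not just the all-day mean. Assumes the throughput_log.csv format from
# the sketch above (ISO timestamp, Mbps).
import csv
import statistics
from collections import defaultdict
from datetime import datetime

samples_by_hour = defaultdict(list)
all_samples = []

with open("throughput_log.csv") as f:
    for row in csv.reader(f):
        if len(row) != 2:
            continue  # skip error rows
        ts, mbps = datetime.fromisoformat(row[0]), float(row[1])
        samples_by_hour[ts.hour].append(mbps)
        all_samples.append(mbps)

print(f"all-day mean: {statistics.mean(all_samples):.1f} Mbps")
for hour in sorted(samples_by_hour):
    vals = sorted(samples_by_hour[hour])
    # 5th percentile: the "bad half hour" shows up here, not in the mean
    worst = statistics.quantiles(vals, n=20)[0] if len(vals) >= 2 else vals[0]
    print(f"{hour:02d}:00  median {statistics.median(vals):5.1f} Mbps   "
          f"5th pct {worst:5.1f} Mbps")
```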


Indeed; ISP-level stats aren't fine-grained enough. We need route- and hop-level stats to find the real bottlenecks.
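Agreed. For anyone who wants to collect that kind of data themselves, a minimal sketch (hostnames are placeholders): run traceroute on a schedule toward the destinations you care about and keep the timestamped per-hop output, so path changes and per-hop latency growth show up when you compare runs.

```python
# Minimal sketch of collecting hop-level data over time: run traceroute
# toward a few destinations on a schedule and keep the raw per-hop output
# with a timestamp, so you can diff paths and per-hop latency later.
import subprocess
import time
from datetime import datetime

# Placeholder destinations; in practice, point this at the services whose
# paths you care about (video CDNs, M-Lab servers, etc.).
HOSTS = ["example.com", "example.net"]
INTERVAL_SECONDS = 60 * 60  # one trace per host per hour

def trace(host: str) -> str:
    """Run a numeric traceroute and return its raw output (or the error)."""
    try:
        return subprocess.run(
            ["traceroute", "-n", host],
            capture_output=True, text=True, timeout=120,
        ).stdout
    except (OSError, subprocess.TimeoutExpired) as exc:
        return f"traceroute failed: {exc}"

if __name__ == "__main__":
    while True:
        stamp = datetime.now().isoformat()
        with open("traceroute_log.txt", "a") as f:
            for host in HOSTS:
                f.write(f"=== {stamp} {host} ===\n{trace(host)}\n")
        time.sleep(INTERVAL_SECONDS)
```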


This actually isn't a net neutrality issue: back in 2009, when the FCC was writing the Open Internet Order, it decided not to mess with peering and interconnection disputes. So designating ISPs as common carriers could be a first step here, but it wouldn't be the only step.


Could you explain in a little more detail? I was under the impression that the FCC would fix prices for common carriers, and therefore they would have to at least charge everyone in the same manner.

