When I read these I'm amused that people are surprised by Google's network. Its actual capability is nominally secret (my employment agreement and exit agreement both mention restrictions on revealing non-public details of Google's infrastructure), but in this case there are many publicly available clues which should give you an idea.
Consider that Google is investing in undersea cables [1] which nominally provide multi-terabit speeds between continents. And Google is cheap, really cheap, in terms of what they will pay for infrastructure, which is why they started making their own switches [2]: even commodity Ethernet vendors weren't cheap enough (although in these days of state-funded cyber armies one also has to protect against foreign-made equipment [4]).
What's more, the amount of data you can send over a single fiber keeps growing [3], so simply replacing the inter-datacenter lasers can quintuple bandwidth.
There have been some interesting 'escapes' of course. Once, when one of the data centers became unavailable, all the YouTube traffic for a very large local population started coming from another data center very far away; latencies went up but throughput didn't change. I know a couple of network administrators did a double take on what they were seeing :-)
Good question. I don't know. But I'll share a bit of speculation and you can tell me if there are holes in my reasoning.
Without going into specifics, a lot of effort at Google went into pushing 'popular' videos closer to the places they were likely to be played. Any caching or content delivery system is, by definition, limited by the amount of edge cache it has and by its ability to predict where that content will be requested.
One of the things putting pressure on that cache is probably Google's effort to put 'premium' content near possible subscribers. Clearly people who are paying money have higher expectations than people randomly watching the latest cat pratfall. So if you're Google and you're doing more "TV"-like things, you may have reasons to reserve some of that cache space for your premium content, whether or not folks have asked for it from that part of the network 'yet.'
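Purely as an illustration of the idea (not how YouTube's cache actually works; the class and numbers are invented), a reserved-slice policy might look roughly like this: premium content gets pushed into its own slice of the cache ahead of demand, and everything else competes for what's left.

    # Illustrative sketch only: an edge cache that reserves part of its capacity
    # for 'premium' content pushed there ahead of demand. Not Google's real system.
    from collections import OrderedDict

    class EdgeCache:
        def __init__(self, capacity, premium_reserve):
            self.capacity = capacity                 # total slots at this edge site
            self.premium_reserve = premium_reserve   # slots held back for premium pushes
            self.premium = set()
            self.regular = OrderedDict()             # LRU order for on-demand content

        def push_premium(self, video_id):
            """Pre-place premium content, whether or not anyone here asked for it yet."""
            if len(self.premium) < self.premium_reserve:
                self.premium.add(video_id)

        def request(self, video_id):
            """Serve from cache if present; otherwise cache it in the remaining space."""
            if video_id in self.premium:
                return True
            if video_id in self.regular:
                self.regular.move_to_end(video_id)   # keep it 'recently used'
                return True
            if self.regular and len(self.regular) >= self.capacity - self.premium_reserve:
                self.regular.popitem(last=False)     # evict the least recently used item
            self.regular[video_id] = True
            return False                             # miss: fetched from upstream this time

The point being that every slot reserved for premium pushes is a slot that can no longer absorb whatever the crowd happens to be watching.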
Another interesting area is transit networks. Generally the tier-1 networks that provide peering connections have done so on a quid pro quo basis (basically, I'll carry your traffic if you'll carry mine). However, as web 2.0 companies like Google or Netflix start creating very large flows, suddenly, if you are 'peering' and your peer is putting a LOT of traffic on your fiber but you have relatively little for theirs, maybe you start thinking you want to be compensated for that. We talked about Level 3 and Comcast going at it in: http://news.ycombinator.com/item?id=1954077
So an explanation might be that your transit network is 'throttling' its peering relationship. I could easily see someone agreeing to 'peering' terms where they take the traffic they've sent and use that as a 'credit' and provide full rate service for an equivalent amount of traffic coming from the peer, but excess traffic might get throttled rather than being 'free.'
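Purely as a sketch of that idea (no carrier's real policy, and the numbers would be invented), the accounting might look roughly like this:

    # Sketch of a credit-style peering policy as speculated above: traffic we send
    # to the peer earns credit, credited inbound traffic gets full rate, and any
    # excess inbound traffic gets throttled instead of being 'free'.
    class PeeringCredit:
        def __init__(self, full_rate_mbps, throttled_rate_mbps):
            self.credit_mb = 0.0
            self.full_rate_mbps = full_rate_mbps
            self.throttled_rate_mbps = throttled_rate_mbps

        def record_outbound(self, megabytes):
            """Traffic we hand to the peer builds up our credit."""
            self.credit_mb += megabytes

        def rate_for_inbound(self, megabytes):
            """Inbound traffic covered by credit gets full rate; the rest is throttled."""
            if megabytes <= self.credit_mb:
                self.credit_mb -= megabytes
                return self.full_rate_mbps
            self.credit_mb = 0.0
            return self.throttled_rate_mbps

A roughly balanced peer would never notice, while a peer pushing far more traffic than it accepts would see the excess slow down.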
This is a very active discussion in some circles. I would not be surprised to see it come up in any discussion of throughput to 'YouTube' vs local throughput.
That doesn't mean that they have infinite pipes to Google's edge caches, though (you're not exactly the only one pulling data through those pipes :-). Or that they're even pulling their data through the right pipes. I suggest talking to your ISP. If enough users complain to them, and the problem isn't in their network, they'll complain to Google.
It's not just one ISP. I've seen the behavior described by the parent from multiple ISPs in separate countries. It's been driving me nuts for so long that I started collecting information about the connections I've seen it from. Years ago, on 512kbit DSL, it was MUCH better.
PAETEC DS3 (45 Mbit/s) in San Diego, CA
Time Warner cable modem (20 Mbit/s) in San Diego, CA
Colocation at AmericanIS.net (1 Gbit/s via Level3.net) in San Diego, CA
My YouTube connection has been strange for some weeks now. I'm in Brazil, accessing the Internet from very close to a core network (Terremark NAP). I started to notice the buffering after watching/listening to instant.fm by feross.
It's strange: sometimes a video downloads at 38 Mbps - very fast - and other times it keeps buffering, resulting in less than 1 Mbps. I can see it with NetMeter (I love to always have a graph of my connection speed).
I analysed it a little bit; all I could see was packet loss and retransmissions. It might be related to the IP I'm getting the YouTube video from, but I still have to dig into it more.
Ha! I was wondering that as well. It went from the best way to watch short clips to a very average way. The (big) competitors don't buffer for me, while YouTube does these days.
Now, let’s put these numbers into perspective. According to Ookla’s Net Index, the average speed for a country in Europe is around 12 Mb/second.
That's partly because most people are happy with such speeds (or at least the speed/price ratio). At home I have a subscription with 120MBit/s downstream. Speed tests usually give 130MBit/s during daytime and 140MBit/s at night.
Given that, 500 MBit/s downstream is the very minimum one would expect from one of the leading Internet companies, probably located close to one of speedtest.net's nodes.
More importantly, Google is one of the world's largest ISPs (practically Tier 1) and maybe the world's largest CDN. The speed they're seeing is quite obviously limited by Speedtest itself or by some artificial cap, not by their lines.
Google's offices wouldn't necessarily have as much bandwidth as their data centers. The real surprise here is that Google gives their offices massive connections (presumably to the GBone).
It's not that unlikely. Consider that the offices may have to push significant amounts of code/data back and forth to the data centers. You probably want to minimize the amount of developer time wasted.
Consider this question: let's say the cost of the infrastructure at Google HQ is X, and X is the same regardless of the internet connection. Now let's say Y is the price of an average internet connection for Google's offices, and you pay 4 * Y, or even 10 * Y, to get a much better one. Whatever. The point is that if it makes your workers (5,000 people?) even a bit more productive, say 3%, that 3% of developer productivity translates to WAY more than the price of that connection.
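A back-of-the-envelope version of that argument, with entirely made-up numbers just to show the shape of it:

    # Back-of-the-envelope sketch of the productivity argument above.
    # Every number here is an illustrative assumption, not a Google figure.
    employees = 5000                  # headcount at the office
    cost_per_employee = 200_000       # assumed fully loaded cost per person per year, USD
    productivity_gain = 0.03          # the 3% from the comment

    average_connection = 50_000       # assumed yearly cost of an average link (Y), USD
    fat_connection = 10 * average_connection    # the 10 * Y case

    value_of_gain = employees * cost_per_employee * productivity_gain
    extra_cost = fat_connection - average_connection

    print(f"Value of a 3% gain:     ${value_of_gain:,.0f}/year")   # $30,000,000
    print(f"Extra cost of the pipe: ${extra_cost:,.0f}/year")      # $450,000

Even with much more conservative assumptions, the gain dwarfs the extra cost of the connection.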
I suspect too that there just might have been a fast link between the Google and Speedtest data centers, maybe because Google earlier rented part of the other data center. A 3 ms ping looks like it must be from a Bay Area data center to SFO. They should really average the latency/bandwidth to other Speedtest nodes in the US.
It's not from a data center to the Speedtest node; it's from the campus. It wouldn't matter anyway. I work at Cisco and I have a gigabit link to the core from my laptop. The problem is that the Speedtest nodes can't push traffic fast enough to fill the pipe, which is something I regularly do when working with campuses in MA and NC.
Ziggo in The Netherlands. Television, telephone, and internet all in one for 67 Euro per month. They have Internet-only subscriptions as well, but the whole package deal is more attractive ;).
Here's something interesting - Hong Kong's got 1000 Mbps, because the company in question is willing to swallow large losses for years to gain market share. Also mentioned is what it would require for the same to happen in the US -
> In the UK the fastest one can go is with Virgin 50MB
. . . well, unless you move to Bradwell Abbey or Highams Park: then you should soon get FTTP at about 100Mbps. An Openreach guy was telling me this afternoon it should just be a few months.
We (Bournemouth) were due to be getting Fibrecity, 100 Mb to the home, but it looks like they switched on a few roads, dug up a whole lot more, then tanked. Their original big-fanfare plan was to route the fibre through the sewer system rather than digging up the streets again... doesn't seem that panned out too well!
The bandwidth each computer has access to is almost certainly not the entire bandwidth of Google's campus connection - if it were, the per-person bandwidth would be quite low.
Of course, not being a Google employee, I wouldn't know for sure.
Why, exactly, would a consumer NIC be faster than 1Gbps? I'm sure the MV campus bandwidth consists of a ton of fiber directly to the core. They probably give everyone all the bandwidth they can stomach--no one could generate that much traffic, even if they wanted to.
Of course, the bandwidth is limited to gigabit (probably) to the desktop, or less if the OS/hardware can't handle it. And I'm sure they limit bandwidth for users... imagine a PC infected with a virus, or BitTorrent left running, if everyone had full gigabit upload speeds.
That rating is for Google as an ISP. They provide free WiFi to the city of Mountain View which isn't always fast depending on where you are. That probably accounts for the low rating.
It's a bit depressing to think that some day we'll all have connections this fast, but unless we learn to violate the laws of physics, we'll never have single digit latencies around the globe. I like to imagine technology always getting faster, but the speed of light really starts to limit things.
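To put rough numbers on that (assuming light in fiber travels at about two-thirds of its vacuum speed, and a perfectly direct path):

    # Rough lower bound on round-the-globe latency, set by the speed of light.
    # Assumes ~2/3 of c in fiber and an ideal great-circle path; real routes are longer.
    c_vacuum_km_s = 299_792
    c_fiber_km_s = c_vacuum_km_s * 2 / 3
    half_circumference_km = 20_000     # roughly half of Earth's ~40,000 km circumference

    one_way_ms = half_circumference_km / c_fiber_km_s * 1000
    print(f"one way: ~{one_way_ms:.0f} ms, round trip: ~{2 * one_way_ms:.0f} ms")
    # ~100 ms one way, ~200 ms round trip - nowhere near single digits.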
I had a site user in the Netherlands insist that he should be getting 10ms response times from our server in Seattle. I told him that in AMERICA we obey the laws of physics.
This is impressive, but it loses a little luster when you realize how close they are to the testing server.
SpeedTest's SF server is hosted by MonkeyBrains.net. A quick traceroute shows their servers are just one hop away from Cogent's SF backbone. I suspect whoever ran this test has fewer than four hops between them and the testing server, and that the slowest link in the path is the gigabit Ethernet port at their desk.
This reminds me of the good old days, the 90s, when I was a teenager and finally upgraded to a ~50KB/s cable modem from my puny 33600 baud modem. That was a game changer, I even ran an FTP server for a while =) It's been getting faster ever since then, but I never again had that feeling of "OMG it's so fast let's download something for the hell of it".
My first job was in that era, at my hometown ISP. I got a discount on shotgun modems. We could pirate Doom and Photoshop 3 on the T1. Nothing ever seemed that fast, even now that I have a 35 Mbps FiOS line.
Seriously. I wish I could get 10 Mbps internet at home... I haven't been able to do so in almost 10 years of living in a very big city and the suburbs of another very major city...
And now for a bit of perspective, here in India I am working with a 512kbps connection, which is the only one I can get without any 'fair use' caps. The latency for the fastest DNS is about 350ms, and on weekends, my bandwidth can be as low as 200kbps.
Are you in a major city or a more outlying area? Is that a wired or wireless connection? Even wired access could have wireless backhaul... assuming yours is wired, is that the reason for the slowness, or have they really been hard at work all over India laying conduit with copper instead of fiber?
It is the same all over the country. That's the only plan without any 'fair usage cap'. You might get 4 Mbps for $55, but after 15 GB of usage, the bandwidth is cut down for the remainder of the month.
Also, 3G just launched in India. No more than 50 cities have it, and the maximum they can provide is 32Mbps, again with some fair usage cap!
I think the fastest connection I ever had was back in the day when I worked at CERN. They host(ed?) a European internet hub, so rates were pretty damn quick.
It's easy to get spoiled. 12 years ago my computer was two hops away from an OC-192, and back then I didn't even have anything interesting to try and download (I think I mostly downloaded SunOS ISOs).
To be honest I can't remember. This was 10 - 12 years ago though so what was absurd then is probably pretty standard now. It was a heck of a lot quicker than the T1 line we had in the Physics dept back in London :)
Sorry, mind-blowing? Plenty of office buildings and universities have faster links than that. I could rent a 1 GBit/s fibre to my office for ~3000 EUR/month.
Also, couldn't he come up with a more meaningful test than a questionable speed-tester site...
Am I the only one who thought they meant latency? I like my speeds well enough, but I could use some reduced latency. I guess there isn't much one can do about this, though.
Is this really "mind-blowing"? I can upgrade my personal home fiber to 400/400 Mbit/s for 6,000 NOK (1,100 USD) per month here in Oslo. Expensive, yeah, but not too unreasonable.
Wow, I did not realize my internet connection at work is actually that good... I ran a speed test some months ago and it was in the same range (see http://speedtest.net/result/1147935298.png). Back then I thought "not bad", but I didn't expect a post about something like that to make the homepage of HN...
I find faster internet stops meaning anything once I have reached speeds that allow 100MB downloads in less than 30 seconds, and the connection provides < 100ms latency.
I am not every user, but I personally find that I only rarely download files larger than 100MB, so anything faster is generally underutilized.
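For reference, that threshold works out to a fairly modest line speed (ignoring protocol overhead):

    # What "100 MB in under 30 seconds" means as a line speed.
    size_mb = 100        # download size in megabytes
    seconds = 30
    mbit_per_s = size_mb * 8 / seconds
    print(f"~{mbit_per_s:.1f} Mbit/s")   # ~26.7 Mbit/s, ignoring protocol overhead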
Good but nothing special; I live in Lithuania and I download files at 140 MBit/s. I am using my internet provider's cheapest plan, and if I wanted to pay more I could get faster internet.
In my experience, it's only relative to the speed of the server you are connecting to. I have a 20-megabyte (not megabit, which is a common misconception) circuit, but I usually only get 5-8 MB/s on average from the server I am connecting to.
We didn't notice much of a difference when we upgraded from 10 MB to 20 MB for this reason. We have more pipe to download concurrently, but that doesn't necessarily mean the "internet" is going to be "faster".
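For anyone tripped up by the megabit/megabyte distinction the parent mentions, the conversion is just a factor of eight:

    # Megabits vs. megabytes: the factor-of-eight difference mentioned above.
    def mbit_to_mbyte(mbit_per_s):
        """Convert a line speed in Mbit/s to MB/s (1 byte = 8 bits)."""
        return mbit_per_s / 8

    print(mbit_to_mbyte(20))    # a 20 Mbit/s line moves at most 2.5 MB/s of payload
    print(mbit_to_mbyte(160))   # you need ~160 Mbit/s to sustain 20 MB/s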
I don't get the point of this post. If the screenshot said MB/s, then that would be something.
My apartment building has a corporate internet connection freely available to residents, and I always get 500-700 Mbps.
Any of us could have that if we felt OK paying for an OC3 or OC12 connection at our home or office, right? I'm sure plenty of companies and universities have such service.
Probably not; why would they want cached (outdated) content unless a particular site is down? Also, it's not like that content lives on servers in their office; it's in data centres all over the world (presumably).
That doesn't even make sense. For you to get to those sorts of speeds, you need jumbo frame support. While you can get that taken care of on a LAN/WAN or for Internet2, you can't magically make the Internet support it just by sprinkling Google fairy dust everywhere.
On my local LAN I am able to get about 40 MB/sec consistently over a gigabit link, and most of that is limited by the fact that I am reading from my local disk when doing so.
Also, the test probably isn't designed for internet connections that fast which could also lead to funky results.
I have a gigabit network at home and I can get up to 80MB/s, usually over 65MB/s sustained. I tried jumbo frames but due to the variety of different devices, I couldn't get everything working on the same MTU, so I had to drop back to 1500. Still get good speeds.
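For a sense of how much the frame size actually matters on a gigabit link, here's a rough back-of-the-envelope (assuming plain TCP/IPv4 headers, and ignoring ACK traffic and TCP options):

    # Rough upper bound on useful TCP payload throughput over gigabit Ethernet,
    # at the standard 1500-byte MTU vs. 9000-byte jumbo frames. Illustrative only.
    link_mbit = 1000
    eth_overhead = 14 + 4 + 8 + 12     # Ethernet header + FCS + preamble + inter-frame gap, bytes
    ip_tcp_headers = 20 + 20           # IPv4 + TCP headers, bytes

    def max_payload_mb_per_s(mtu):
        payload = mtu - ip_tcp_headers
        wire_bytes = mtu + eth_overhead
        return (link_mbit / 8) * payload / wire_bytes   # MB/s of actual data

    print(f"MTU 1500: ~{max_payload_mb_per_s(1500):.0f} MB/s")   # ~119 MB/s
    print(f"MTU 9000: ~{max_payload_mb_per_s(9000):.0f} MB/s")   # ~124 MB/s
    # Either way, framing overhead costs only a few percent of the wire rate.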
[1] http://www.google.com/intl/en/press/pressrel/20080225_newcab...
[2] http://gigaom.com/2007/11/18/google-making-its-own-10gig-swi...
[3] http://www.newscientist.com/article/mg21028095.500-ultrafast...
[4] http://www.popularmechanics.com/technology/gadgets/news/4253...