Networks are a fundamental part of Google's infrastructure. Ask yourself a couple of questions and you can work out Google's motivations for yourself.
1) Is there any switch that does only what you want? Have any of the features you didn't need interfered with your uptime?
2) In a 48-port, high-performance switch, which is more expensive: the switch or the cables? Why?
So if you 'do the math' the obvious answer sort of pops out.
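A toy version of that math, with prices that are purely illustrative assumptions rather than real quotes:

    # Back-of-envelope sketch; every price here is an assumed placeholder.
    ports = 48
    cable_cost = 60            # assumed cost of one high-speed cable (USD)
    commodity_switch = 3000    # assumed merchant-silicon box that does only what you need (USD)
    vendor_switch = 15000      # assumed full-featured vendor switch (USD)

    cabling = ports * cable_cost
    print(f"cabling ${cabling}, commodity switch ${commodity_switch}, vendor switch ${vendor_switch}")
    # -> cabling $2880, commodity switch $3000, vendor switch $15000

With numbers in that neighborhood, the cabling for a rack costs about as much as a stripped-down switch, while a full-featured vendor box costs several times either one; that is the direction the questions above are pointing.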
Disclaimer: I worked there and agreed not to discuss any platform technologies until such time as either Google talked about them publicly or they were disclosed by other means not related to me.
To be honest I don't know. I do know that Google has a tendency to brag about things after they remove them from service, so from a practical standpoint it's not 'forever', but when I get home I'll check the language.
Hasn't Google already talked a bit about this? They build (and possibly buy) switches that are OpenFlow-enabled, so they can, among other things, control the forwarding of many switches from a central control plane, e.g. http://www.wired.com/wiredenterprise/2012/04/going-with-the-...
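To give a feel for what a 'central control plane' means in practice, here is a minimal sketch of an OpenFlow controller app written against the open-source Ryu framework; Ryu is just one controller picked for illustration, and none of this is Google's actual software. It pushes a single table-miss rule to every switch that connects, so unmatched traffic is steered to the controller:

    # Minimal OpenFlow controller sketch using Ryu (run with: ryu-manager thisfile.py).
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class CentralControlPlane(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            # Every switch that connects gets its forwarding behaviour
            # pushed from this one controller process.
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            match = parser.OFPMatch()  # match everything
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                              ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            # Lowest-priority rule: send unmatched packets up to the controller.
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))

A real deployment would install concrete per-switch forwarding rules instead of punting everything to the controller, but the pattern of one program programming many switches is the same.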
Yes they have talked about things from a software perspective but very little from a hardware/systems perspective. I keep hoping the Open Compute project comes out with a switch design.
Heh, I'm completely baffled why it has Finnish text on it.
From the blurry pic I can make out "Laite on liitettävä suojamaadoituskoskettimella varustettuun pistorasiaan", which basically means "the device must be connected to a grounded outlet".
Early 2009. Date lines up with how old the ex-engineer thinks the device might be (3ish years old). So perhaps the Finnish text is explained by this being a piece of Hamina Data Center's original networking gear.
"Ex-Google engineer J.R. Rivers — who now runs a networking outfit called Cumulus Networks — says this appears to be an older switch, something that has been used for about three years or so"
A friend who used to do networking stuff for a global bank had good things to say about Nokia networking gear.
So I guess there are good networking hardware engineers in Finland, and it's not a stretch to imagine Google might have snapped some of them up somewhere along the way.
Nokia has been hammered in the past couple of years and has shed[1] and will be shedding more[2] jobs over the next several quarters. Google and other technology companies have probably been circling Finland for a number of months now, trying to identify and recruit the best talent they can find.
I often hear about Google's custom hardware. Are there any other large tech companies (who are not hardware companies) that build/design their own hardware to run internally?
At Bloomberg, we design almost all of our branded hardware in-house, from the terminals (monitors/keyboards)[1] to the B-UNIT authentication devices[2]. There are also other internal datacenter hardware projects.
BTW, do you know why Bloomberg switched from the old, more comfortable B-Units (as pictured above) to the new black ones, which don't feature a full fingerprint reader anymore (instead you have to slide your finger over a smaller sensor)?
The old design was way more comfortable, faster, and produced fewer errors in my experience...
The scan quality of the swipe sensors is actually better than that of the older area sensors, and they also cost less and take up less space on the circuit board. Perhaps the enrollment scan you did was not as good as it could have been and you should re-enroll to capture a better fingerprint.
For those who don't know, when folks talk about tech companies building their own hardware, they almost never mean that literally. The manufacturing (and often some joint design work) is farmed out to third parties like Pegatron, Compal, Quanta, Flextronics, Sanmina-SCI, Foxconn, etc. Even Google's Nexus Q, which they so proudly declared was designed & built in the USA, was still not built by Google themselves.
Some, like Dell, Oracle, Apple, and HP, are in the hardware design business, so no surprises there; others, like Facebook, Amazon, and Google, have such large hardware needs that a significant amount of cost can be recovered by doing so; and still others, such as the Wall Street firms, have specialized needs, so it makes sense for them as well.
I was asked about custom hardware in an interview once and pointed out that it was pretty straightforward to go to a PC original design manufacturer (ODM) or OEM and say "I'd like an x86 architecture box that has these features ..." and get it built. The only question is whether or not that makes business sense.
The only question is whether or not that makes business sense.
And unfortunately there's still zero public information on that side of the equation, even from the Open Compute Project. I wonder how many customers are missing out because they don't even know to ask.
Can you say more about this? There is a lot of information about infrastructure costs. Put 'operations decision maker' on your business card and folks who want to share that information with you sort of ooze out of the woodwork :-)
You can 'save' anywhere from 10 to 35 percent on the cost of your computing infrastructure with some investment in customization. This is a combined OpEx + CapEx number, since you benefit from both lower staff costs and lower equipment costs. Example: if it's easier for your staff to get a machine back into service, they spend less time on it, so it costs less (OpEx); if you've got a larger cabinet holding all your machines, you don't need an individual chassis for each machine, so they cost less to produce (CapEx).
For folks for whom their 'data processing' infrastructure is a small part of their overall costs it isn't worth it, but for companies that live and die on the marginal cost of one more user it's pretty critical stuff to know.
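To make that concrete, here's a back-of-envelope sketch of the combined number; every figure in it is a made-up placeholder, not a number from the comment above:

    # Toy OpEx + CapEx savings estimate; all inputs are assumed placeholders.
    servers            = 10_000
    capex_per_server   = 2_000   # commodity box, USD
    chassis_saving     = 0.15    # e.g. shared-cabinet design drops per-box chassis cost
    opex_per_server_yr = 300     # staff/service cost per server per year, USD
    service_saving     = 0.25    # e.g. faster return-to-service
    years              = 3

    capex_saved = servers * capex_per_server * chassis_saving
    opex_saved  = servers * opex_per_server_yr * service_saving * years
    total_spend = servers * (capex_per_server + opex_per_server_yr * years)
    print(f"saved {capex_saved + opex_saved:,.0f} of {total_spend:,.0f} "
          f"({(capex_saved + opex_saved) / total_spend:.0%})")
    # -> saved 5,250,000 of 29,000,000 (18%)

With placeholder inputs like these the combined saving lands around 18 percent, i.e. inside the 10 to 35 percent range described above.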
...folks who want to share that information with you sort of ooze out of the woodwork
Yeah, that was my impression. IOW, you can't find out anything unless you're actually buying. I don't have the budget for any of this stuff; I am just curious so I've looked around the Web and found no hard numbers.
The Wireshark capture showing BOOTP was interesting. I don't live in the networking world anymore, but I thought BOOTP had gone the way of Gopher by now.
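For what it's worth, DHCP still rides on the BOOTP message format, which is why packet dissectors have traditionally labeled that traffic as BOOTP; the capture is most likely the switch asking for an address and a boot image as it comes up. If you want to spot the same thing on your own network, here's a minimal Scapy sketch (the interface name and packet count are assumptions):

    # Minimal sketch: watch for BOOTP/DHCP traffic with Scapy (needs root).
    from scapy.all import sniff, BOOTP

    def show(pkt):
        if BOOTP in pkt:
            b = pkt[BOOTP]
            # op 1 = BOOTREQUEST (client asking), op 2 = BOOTREPLY (server answering)
            print(b.op, hex(b.xid), pkt.summary())

    sniff(filter="udp and (port 67 or port 68)", prn=show, count=10, iface="eth0")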