I've been to NY4! It really is less magical than these articles make it seem. Yes - the security is a bit imposing. For anyone familiar with datacenters, though, it is pretty run of the mill except for its tremendous scale.
Worth shouting out to Lucera - I have met the CEO and he is a really thoughtful technologist. They are doing some awesome work bringing AWS-style computing to the low latency space. Some interesting details on their deployment:
http://www.enterprisetech.com/2014/02/20/amazon-cant-catch-l...
Not really. Even in a world where HFT is limited, cross connections provide consistent connections between trading entities. While AWS provides over-the-Internet or "black boxed" networking on a best efforts basis, Lucera's connections are point to point.
If you are at all into this sort of thing (data centers) and you ever get a chance to go, I strongly recommend it.
The amount of tech in this building is just staggering. As the article hints at, everyone who is anyone in finance has a spot there. This creates a network effect: the more firms that colocate there, the bigger the draw it becomes for the remaining few who aren't currently colocated there.
In the interconnected world of finance, data center cross connects become very important. It's not unusual for a prop trading firm to have 30 different cross connects to vendors, exchanges and sell side firms.
So I guess it's not surprising that there have been companies formed with the sole intention of connecting companies that already exist in the data center. Radianz and Lucera are mentioned in the article, but there are usually half a dozen in each of the main data centers in the New York and Chicago areas. As usual, Nanex does a decent job of describing why there are so many data centers and why they are all at least 10 miles apart.
The Medium piece is interesting. I'm still not sure where I stand on HFT; the technology is endlessly fascinating, I'm just not sure about the ends to which it is put.
I visited an Equinix data center in San Jose and it was an incredible experience. The redundancies and failsafes they have in place are mind blowing. No two doors to the same room can be open at the same time. All doors use handprint readers.
They have racks of big batteries filling a huge room that kick in automatically in case of power failure. These last about an hour, until a 10,000 HP generator on the roof starts and powers the data center for a week.
Typically the operators will have an agreement in place so they can get the generators refueled should the outage last longer than the fuel they have on premises, so theoretically they could run indefinitely on generator power.
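Rough math on that "for a week" figure (every number below is my own assumption for illustration, not an actual Equinix spec):

    # Back-of-envelope runtime on stored fuel. All figures are illustrative
    # assumptions, not actual facility specs.
    GENERATOR_LOAD_KW = 2_000    # assumed electrical load carried by the genset
    GALLONS_PER_KWH = 0.07       # rough diesel burn for a large genset at load
    TANK_GALLONS = 30_000        # assumed on-site fuel storage

    burn_rate_gph = GENERATOR_LOAD_KW * GALLONS_PER_KWH    # ~140 gal/h
    runtime_days = TANK_GALLONS / burn_rate_gph / 24       # ~8.9 days
    print(f"{burn_rate_gph:.0f} gal/h, ~{runtime_days:.1f} days on stored fuel")

With those assumptions you get roughly nine days before the refuelling contracts have to kick in, which squares with "about a week".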
An outage requiring generator power for longer than a week might not allow transportation of fuel. Just want to bring that up in case people really did think you can run "indefinitely" on generator power if the roads, bridges, and gas supply network are out, or if there's a state of emergency due to war or terrorism.
Once you're in 1wk+ territory, it will start to look something like Katrina, where the roads were all blocked by floodwaters and fuel resupply missions required the protection of the National Guard.
I mean, you can charter helicopters to bring in fuel, but I think there are more urgent things to power than a datacentre in those cases - for example, hospitals and shelters providing emergency humanitarian aid.
> At the 1wk+ territory, it will start to look something like Katrina ...
This reminds me of a blog that was live-updated during Katrina by a data center operator: [1]. They managed to keep everything running through the whole storm, IIRC, but it was pretty dicey -- moving barrels of diesel by hand, staying on-site and surviving with water rationing, losing all but one upstream fiber, etc. Fascinating reading in hindsight.
Where I used to work we had refuelling agreements with multiple vendors. Yes, hospitals and critical infrastructure are prioritized first, but we ran on generators for an extended spell during the Northeast Blackout of 2003[1] (the outage affecting our facility only lasted a couple of days, but due to the utilities' appeal to reduce load on the grid until it was fully functional we decided to run on generators for over a week).
Edit: One of my fond memories of that time was actually greeting the refuelling truck and directing it to the point where he could plug a hose into a pipe on our building and start pumping.
If terrorism is the concern, what stops someone from renting cabinets, bringing in "servers" loaded with explosives, and blowing the place up? Or even a molotov thrown at some of those big fibre entrances? Sorting that out, I'd imagine, would take a while.
Sometimes it feels like the opposite. Any day that has the slightest hint of holiday to it is "markets closed! remember no work tomorrow! hooray for national silly lala day!"
These days, I'd hope they can failover to standby hosts in "IL#" or "VA#" or another datacenter in a nearby region.
The DR/BC space has become pretty robust, and with the right vendor, a datacenter loss can be recovered from simply by powering on fully-replicated VMs in another datacenter.
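A minimal sketch of the decision gate behind that kind of failover - every name and threshold here is invented for illustration, not any particular vendor's API:

    # Hypothetical DR failover gate: only power on the replicated VMs at the
    # standby site if the primary is really down and the replicas are fresh
    # enough. Every name and number here is made up for illustration.
    MAX_REPLICA_LAG_S = 300   # assumed RPO: replicas at most 5 minutes behind

    def fail_over(primary_healthy: bool, replica_lag_s: float, replica_vms: list[str]) -> list[str]:
        """Return the VM IDs to power on at the DR site, or raise if it's unsafe."""
        if primary_healthy:
            raise RuntimeError("Primary site still healthy; declare a disaster first")
        if replica_lag_s > MAX_REPLICA_LAG_S:
            raise RuntimeError(f"Replica lag {replica_lag_s}s exceeds the agreed RPO")
        return replica_vms    # safe to boot these and repoint traffic

    print(fail_over(primary_healthy=False, replica_lag_s=45.0, replica_vms=["web-01", "db-01"]))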
Heh - I remember back in '01 or '02 being let into the PAIX DC and left alone for around 90 minutes with a backpack full of hard drives and a toolkit.
It was the cheapest/fastest way for us to get our "big" database from Sydney to the US - hop on a plane with about 9kg of hard drives as carry-on. Never underestimate the bandwidth of a 747 full of 340 Megabyte hard disks ;-)
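The back-of-envelope on that, using the numbers above plus a couple of guesses (drive weight and door-to-door transit time are assumptions):

    # Sneakernet bandwidth: 9 kg of 340 MB drives as carry-on, Sydney -> US.
    # Drive weight and transit time are assumptions for illustration.
    DRIVE_CAPACITY_MB = 340
    DRIVE_WEIGHT_KG = 0.6     # assumed weight of one bare 3.5" drive
    CARRY_ON_KG = 9
    TRANSIT_HOURS = 20        # assumed door-to-door time including airports

    drives = int(CARRY_ON_KG / DRIVE_WEIGHT_KG)              # 15 drives
    total_megabits = drives * DRIVE_CAPACITY_MB * 8          # ~5.1 GB of payload
    effective_mbps = total_megabits / (TRANSIT_HOURS * 3600)
    print(f"{drives} drives, ~{effective_mbps:.2f} Mbit/s effective")

Roughly 0.57 Mbit/s sustained, which a trans-Pacific link you could actually afford in 2001/2002 would have struggled to match - and the "latency" is one flight either way.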
Equinix loves the flash interior decorating, but it doesn't do anything for your uptime or latency. Datacenters really aren't that interesting; maybe the "meet me room" has some neat-looking network gear, but by and large DCs are visually monotonous. Question - aside from having a high concentration of Wall Street firms colocated there, what is interesting about this? What innovations are they making?
The funny thing about all the security - the man traps, the palm readers, the alarm systems - is that there is still usually some security guard at the front desk who is often napping. Not that I blame them, but it's kind of laughable.
Equinix is really a top tier provider. I have some space down in San Jose SV5 and it's pretty awesome. If anyone is interested in a colo swap let me know. I have a few units free and would love to barter it for some space maybe a bit closer to SF.
Also, just for kicks, I requested a quote for the SV5 center a few months back. They are roughly $3.2k/month for a full rack [1].
What makes them a "top tier provider"? They provide power, cooling, and rack real estate. They don't have a backbone. There are plenty of carrier-neutral facilities that provide the same for cheaper. They all provide similar SLAs these days.
Any GPS clock you are going to find in NY4 is going to be sufficiently sophisticated to check the GPS signal against its internal oscillator for validity, and it can fall back to the internal oscillator if the GPS time looks bad.
The other reason this is unlikely to be a problem is that network time protocols break all the time (oops, someone did maintenance and broke sync with the master clock), so lots of real-world trading code has sanity checks on the time.
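For flavour, here's a toy version of that kind of check - compare each GPS reading against what the local oscillator (approximated below by the OS monotonic clock) says it should be, and drop into holdover if it steps or drifts too much. The thresholds are invented; real timing gear does this in firmware:

    import time

    MAX_STEP_S = 0.001     # assumed: reject GPS steps larger than 1 ms
    MAX_DRIFT_PPM = 50     # assumed: flag apparent drift above 50 ppm

    class DisciplinedClock:
        """Toy GPS-disciplined clock with holdover on the local oscillator."""

        def __init__(self, gps_now: float):
            self._local = time.monotonic()
            self._offset = gps_now - self._local    # GPS time minus local clock
            self.holdover = False

        def update(self, gps_now: float) -> None:
            local = time.monotonic()
            step = gps_now - (self._offset + local)             # deviation from expected
            drift_ppm = abs(step) / max(local - self._local, 1e-9) * 1e6
            if abs(step) > MAX_STEP_S or drift_ppm > MAX_DRIFT_PPM:
                self.holdover = True      # GPS looks bad or spoofed: ignore the reading
                return
            self.holdover = False
            self._local, self._offset = local, gps_now - local

        def now(self) -> float:
            # In holdover, free-run on the oscillator using the last good offset.
            return self._offset + time.monotonic()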
It's the same problem. Assume my GPS clock is synced to actual GPS. You turn on your spoofer and now my clock sees that the GPS source is running faster or slower than it should be so it falls back to the internal oscillator.
Those GPS-driven clocks would be marked as falsetickers and thrown out during the pruning process. I would hate to have my plan to create financial chaos blocked by the presence of one Symmetricom box with a CDMA card and/or off-site clock associations.
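A toy version of that pruning (the NTP selection/intersection idea, not ntpd's actual code): each source reports an offset plus an error bound, and anything whose interval doesn't overlap the majority intersection gets marked a falseticker. The example offsets at the bottom are made up:

    # Toy NTP-style selection: keep the largest majority of sources whose
    # correctness intervals [offset - err, offset + err] mutually overlap;
    # everything else is a falseticker. Simplified, not ntpd's real algorithm.
    def find_truechimers(sources):
        """sources: list of (name, offset_s, error_bound_s) tuples."""
        n = len(sources)
        for allowed_false in range(n // 2 + 1):   # require a majority of truechimers
            needed = n - allowed_false
            edges = []
            for _, off, err in sources:
                edges += [(off - err, -1), (off + err, +1)]   # interval start / end
            edges.sort()
            depth, lo, hi = 0, None, None
            for value, kind in edges:
                depth += 1 if kind == -1 else -1
                if depth >= needed and lo is None:
                    lo = value                    # intersection of `needed` intervals begins
                if lo is not None and hi is None and depth < needed:
                    hi = value                    # ...and ends here
            if lo is not None:
                return [name for name, off, err in sources
                        if off - err <= hi and off + err >= lo]
        return []                                 # no majority agrees at all

    # A GPS pulled 30 ms away by a spoofer loses to a CDMA card and an off-site
    # association that still agree with each other:
    clocks = [("gps", 0.030, 0.001), ("cdma", 0.0001, 0.002), ("offsite", -0.0002, 0.003)]
    print(find_truechimers(clocks))               # -> ['cdma', 'offsite']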
Who needs a spoofer when you can make a jammer or an EMP? (Don't do this - I have a friend-of-a-friend story about a guy who did this at uni and got visited by men in black suits; the professor almost got fired.)
What tremendous overhead Equinix is taking on with their facilities and clientele. The Bloomberg author mentions their annual filing[1], which discloses that they have a net income loss and 'substantial debt'.
The numbers I saw were positive - over $100 mil in 2015, with projected $10 in 2016. Revenues are huge. I'm a bit tired so I might have missed the negatives, but it looked good at a glance.
Wow, I remember this project! I was involved in the electrical commissioning for the first phase of the buildout. I remember the 2-hour car rides at 5 AM to get there. One night I had to stay up overnight doing a UPS load test; it was cool walking around with nobody in sight, boring otherwise.