I haven't read the patent, but I can say this is an exceedingly common (I'd probably say standard) strategy. I can only assume the atomic clock bit is what's novel.
Using NTP (or whatever the newer variant is) is also standard, and as I recall it can hit sub-microsecond consistency on a wide-area network with good hardware. So yeah, not new.
It doesn't require hardware support, though hardware really improves the performance. I've implemented IEEE 1588 a few times and was able to achieve accuracies better than ~30 ns when using hardware timestamping. Also note that more and more MACs and PHYs these days offer HW timestamping.
With software timestamping, it really depends on how deterministic your packet handling and timestamping routines are (or how deterministic the OS scheduler is). I was able to achieve accuracies of better than a microsecond on a Linux system, but it was "touchy".
For reference, there are open source implementations called "ptpd" and "ptpd2".
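For anyone curious how the core sync math works: each PTP exchange produces four timestamps, from which the slave derives its offset and the path delay. A minimal sketch (not taken from ptpd; the function name and example numbers are mine), assuming a symmetric network path — path asymmetry shows up directly as offset error, which is exactly why hardware timestamping and PTP-aware switches matter:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute slave clock offset and mean path delay from one
    PTP Sync/Delay_Req exchange (all times in seconds).

    t1: master sends Sync        (master clock)
    t2: slave receives Sync      (slave clock)
    t3: slave sends Delay_Req    (slave clock)
    t4: master receives Delay_Req (master clock)
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # how far the slave is ahead
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # assumed-symmetric one-way delay
    return offset, delay

# Example: slave clock 1.5 us ahead of master, 10 us one-way delay.
off, dly = ptp_offset_and_delay(0.0, 11.5e-6, 50.0e-6, 58.5e-6)
```

The slave then steers its clock by `-offset` (usually through a servo loop rather than a hard step).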
Little-known fact: you can use the GPS constellation to get atomic-level precision time nearly anywhere on Earth. Using an atomic clock is purely to show off to investors/a red herring.
Exactly. GPS is by far an easier way of synchronizing clocks to an atomic reference compared to any master-slave networked approach, especially in situations where we're talking about a small number of stationary, expensive servers which are far apart.
PTP is mainly useful for situations where you want to synchronize many cheaper slave devices to a common master and it's not practical for each device to have its own GPS receiver, or for situations where the use of GPS isn't practical (or is prohibited) and you're more concerned about coherency between devices rather than traceability to a primary time reference (e.g. a telemetry network on an aircraft). Although, generally, the grand master of a PTP network is synchronized to GPS anyway.
Of course, you could achieve actual phase-locked synchronization down to the clock cycle with something like SyncE + PTP. But with GPS, you need not worry about asymmetry in messages transmitted over the internet (PTP needs to be routed through PTP-capable switches which compensate for residence time, and it was really meant for LANs).
I guess it really depends on your constraints, but if you’re able to use GPS, that would definitely be my first pick when it comes to synchronizing multiple devices.
The latest UBlox timing receiver (LEA-M8F) provides a PPS which is accurate to less than 20 nanoseconds (to the UTC second) and its built in oscillator has a typical holdover spec of 0.025 PPM (25 nanoseconds per second). If you want to get fancy, you can use the PPS to discipline an OCXO and get an even better holdover spec to handle the situations where your receiver may lose lock (which is unlikely if you’re able to have an antenna).
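The disciplining loop mentioned above is typically just a phase-locked servo: measure the phase error between the OCXO's 1 PPS and the GPS 1 PPS each second, and steer the oscillator's frequency with a PI controller. A toy sketch — the gains here are deliberately aggressive so the simulation settles in tens of seconds; a real OCXO loop would use much longer time constants:

```python
def pps_servo_step(phase_error_ns, integrator, kp=0.7, ki=0.3):
    """One PI-servo update: given the measured phase error between the
    OCXO's PPS and the GPS PPS (in ns), return a frequency steer (in ppb)
    and the updated integrator state. Gains are illustrative only."""
    integrator += ki * phase_error_ns
    return kp * phase_error_ns + integrator, integrator

# Toy simulation: OCXO starts 25 ppb fast; one servo step per PPS edge (1 s).
phase_ns, steer_ppb, integ = 0.0, 0.0, 0.0
for _ in range(50):
    phase_ns += 25.0 - steer_ppb   # 1 ppb of error over 1 s == 1 ns of phase
    steer_ppb, integ = pps_servo_step(phase_ns, integ)
# The integrator "learns" the 25 ppb offset, so during holdover (GPS lock
# lost, no more PPS edges) the last steer value keeps the OCXO on frequency.
```

That last point is the whole reason for the OCXO: the integrator holds the learned frequency correction, so holdover drift is limited by the OCXO's stability rather than the raw frequency offset.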
Basically, the accuracy of the UBlox GPS receivers (just an example since they're pretty cheap; I found a board for ~$150) is equal to or better than that of a usual PTP link (without SyncE), so you might as well just use GPS on each device if you can. It is simpler, IMO.
However, note that comparing GPS to PTP isn't necessarily valid, since PTP is purely a method of conveying timing information between devices and is not a time source itself, whereas GPS is both a method of conveying timing information and a time source. In other words, a PTP network still needs a master device which itself is synchronized to (or is) an atomic clock.
The errors you see in your mobile phone's positioning are due to signal problems (reflections, etc) and the relatively limited capabilities of the cheap GPS radio in your phone. A decent GPS receiver with a well-positioned antenna will get a highly accurate clock.
The speed of light is 299,792,458 m/s, so if a GPS receiver's clock is off by more than about 1/10,000,000 of a second (100 ns), it can't compute a position accurate to within 30 meters. Having used a GPS, they are better than that, so the clock must be at least that accurate. Of note, stationary stations can get centimeter-level precision, which implies vastly higher timing accuracy.
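The back-of-envelope arithmetic, spelled out — light covers roughly 0.3 m per nanosecond, so timing error maps directly onto range error:

```python
C = 299_792_458.0  # speed of light, m/s

def range_error_m(clock_error_s):
    """Pseudorange error caused by a given clock error."""
    return C * clock_error_s

err_30m = range_error_m(1e-7)   # 100 ns of clock error -> roughly 30 m
t_for_3cm = 0.03 / C            # 3 cm of precision implies ~0.1 ns of timing
```

(Centimeter-level survey receivers get there via carrier-phase tricks rather than raw clock accuracy, but the scaling between time and distance is the same.)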
Except that the state of the art in HFT is sub-microsecond.
"London-based trading technology company Fixnetix said Tuesday it has the world’s fastest trading application, a microchip that prepares a trade in 740 billionths of a second, or nanoseconds." (WSJ, 2011)
That's one of the factors. Generally the optimisations that happen first are on network path length / network equipment induced delays, as there's relatively cheap and quick gains to be made. The bigger delays are invariably in your applications that are processing or generating data, which are more costly to optimise.
That said, the Fixnetix stuff is only talking about one aspect of what's involved, and about as representative of reality as Cisco's published WARP speed figures in their Nexus 3500 range.
The atomic clock bit isn't novel at all. I've worked for HFT firms for the past 9ish years, and using hardware time sources is 101-level intro to electronic trading.
The actual patent talks a lot about NIST GPS clocks, and not so much about atomic clocks. Never trust a headline. Gell-Mann Amnesia Effect in full play here.
Sure. You can't get roof access (for a GPS antenna) or a vendor PTP feed at every exchange. In those places, you get a rubidium stratum 0 time source.
Not all businesses can afford this, but it is only 4 or 5x the price of a normal GPS time source, which is affordable for the right people.
The idea doesn't help the HFTers. It takes an order, transmits it confidentially to computers placed as near as possible to each of the major markets, with instructions so that the computers submit the trade offer at precisely the same time.
The HFTers can't make money, since they can't outrun trade offers that are synchronous across all markets.
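Mechanically, that scheme is just latency compensation against a shared clock: measure the one-way latency to each venue, then release each child order early by that amount. A hypothetical sketch — the venue names and latency figures are invented for illustration:

```python
# Assumed one-way latencies from the order router to each venue, in seconds.
ONE_WAY_LATENCY_S = {"venue_a": 180e-6, "venue_b": 195e-6, "venue_c": 210e-6}

def send_schedule(target_arrival_s, latencies=ONE_WAY_LATENCY_S):
    """Return {venue: send time} such that, on a common synchronized clock,
    every child order arrives at target_arrival_s. The order for the
    slowest link is released first."""
    return {venue: target_arrival_s - lat for venue, lat in latencies.items()}

sched = send_schedule(1.0)
arrivals = {v: sched[v] + ONE_WAY_LATENCY_S[v] for v in sched}
# All arrivals coincide at t = 1.0 s, to within latency-estimate error.
```

The hard part in practice isn't this subtraction; it's keeping the latency estimates accurate and the clocks synchronized tightly enough that no venue sees the order early.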
As someone with zero domain knowledge: why aren't the exchanges already doing precision-timed order processing? That just seems like it should be a standard feature across the board. The broker sends buy/sell orders with planned execution times to all the required exchanges, and the exchanges sit on the orders until the designated time.
Exchanges do time-ordered processing on their own exchange (with different levels of precision). I don't know of any exchange that offers execution time as a constraint, but new order types can be created if they were deemed valuable (it takes SEC approval).
That said, it wouldn't necessarily alleviate the issue. If firms detect problems in the clock sync between exchanges, you're right back to the same problem, and now you've added a complex bit of tech that a bunch of competitors have to agree on.
This seems, to me at least, to be one of those problems where it's better to let the problem surface than to try to alleviate it with an abstraction layer that is leaky and error-prone.
But this patented technology is just implementing the exact same thing one layer removed from the exchange. You still have to time the orders, and you still have to keep the timed orders confidential. To me, using a third party to do this instead of having it as part of the base system is... silly, I guess.
What? It generally only costs a few thousand dollars per month to colocate servers next to exchanges.
Arguably, it is more fair now than it ever was in legacy "open outcry" markets where the size of the floor was fixed and if you didn't get a spot on it you weren't able to compete.
Disclaimer: I've worked in HFT 9ish years (10 soon)