
The regulators are aware of the architectural disadvantages of the SIP. The problem exchanges face is not disadvantaging submission of data to the SIP by running the prop feed applications on faster hardware (systems & network) than the applications that submit to the SIP. The issue I'm referring to is described in the article below: the disadvantage occurred even before the SIP received the packet containing the data.

http://online.wsj.com/articles/SB100008723963904435249045776...

Those who truly care about latency will be reading the direct market data feeds anyway.

The problem I was highlighting is: what definition of "same" should the SEC or the exchanges be held to? When measuring two packets with the same information egressing an exchange, what delta is small enough to be considered the "same" time? Should the delta be within 1 microsecond? 10 microseconds? 1 millisecond? If the acceptable delta is, say, 10 microseconds, what's the acceptable percentile at which the prop data was 10 microseconds faster than the SIP data, or the SIP data was 10 microseconds faster than the prop data? Or is the exchange in compliance as long as the deltas at the 99th percentile don't exceed 10 microseconds?
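To make that concrete, here's a rough sketch (Python, with made-up nanosecond timestamps) of the kind of compliance check I mean: match up packets carrying the same update on the prop feed and the SIP feed, then compare the percentile of the egress deltas against some agreed threshold. The 10-microsecond threshold and the 99th percentile are placeholders for whatever numbers the SEC or the exchanges would actually settle on.

    import statistics

    # Hypothetical egress timestamps (ns) for the same updates,
    # one (prop_feed_ts, sip_feed_ts) pair per matched update.
    samples = [
        (1_000_000_100, 1_000_012_400),
        (1_000_050_200, 1_000_049_900),
        (1_000_090_000, 1_000_101_500),
        # ... one pair per matched update over the trading day
    ]

    THRESHOLD_NS = 10_000  # placeholder: 10 microseconds

    # |delta| captures either feed being ahead of the other.
    abs_deltas = [abs(sip - prop) for prop, sip in samples]

    p99 = statistics.quantiles(abs_deltas, n=100)[98]  # 99th percentile
    print(f"99th percentile |delta|: {p99} ns")
    print("within threshold" if p99 <= THRESHOLD_NS else "exceeds threshold")

Even with a check like this, the open question remains what threshold and what percentile count as "same".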

Nanoseconds count due to efforts like equidistant cabling that the exchanges employ.

For a web property such as the SEC's, should the push service only push out when the webpage is updated? What if the webpages are behind a load balancer and multiple webservers synchronize within several milliseconds? Web requests hitting a webserver that is slightly behind in synchronization could be several hundred milliseconds stale, while the push message has already been out for several seconds.

The article is splitting hairs over seconds, which in the web world isn't a big deal to human consumers. But where machines are the consumers, even 1 millisecond is an eternity.




Fair points re: the SIP. However, it's unfortunate that submission to the SIP is the benchmark for fairness. There is no reason it needs to be that way, and fixing it by distributing the SIP (i.e., removing the centralized processor) has numerous advantages that put the latency, fairness, and SPOF issues to rest.

In that regime, measuring "same" becomes simple: measure, at the source, the delta between the venue-specific SIP feed (which contains the venue's view of the NBBO) and the depth-of-book feed. Over the course of a day, that delta should effectively be 0.
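A sketch of what that per-venue measurement could look like, assuming both feeds carry the venue's sequence number and are stamped at publication with the venue's own clock (feed names and fields here are illustrative, not any actual protocol):

    from collections import defaultdict

    # seq -> {"sip": ts_ns, "depth": ts_ns}
    pending = defaultdict(dict)
    deltas = []

    def on_publish(feed, seq, ts_ns):
        """Record the at-source publication timestamp from either feed."""
        pending[seq][feed] = ts_ns
        if len(pending[seq]) == 2:
            entry = pending.pop(seq)
            deltas.append(entry["sip"] - entry["depth"])

    # End of day: in a distributed-SIP regime the deltas should hover around 0.
    # print(max(abs(d) for d in deltas), sum(deltas) / len(deltas))

Because the measurement happens at the source, none of the downstream network or consolidation latency muddies the comparison.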

You're absolutely right about the SEC website issue. It's silly to get upset about this since the "web" aspect of the distribution has so many layers.




