
Interestingly, the exchanges have been under fire from the SEC with regard to the dissemination of market data. Are the exchanges sending out data from their prop feeds at the same time it is submitted to the SIP?

Well if it is measured at the 1-second or 1-millisecond level, yes the exchanges are in compliance. At the 100-microsecond, 10-microsecond and nanosecond level, perhaps the exchange is in compliance at the 50th percentile.

It will be interesting to see whether the pressure on the exchanges changes course now that the regulators can empathize, facing the same type of dissemination problem themselves.




Exchanges do send data to both the SIP and direct data feeds at the same time. The problem is that the SIP has an intrinsic disadvantage: the architecture and technology used guarantee that SIP market data will always be slower than direct market data.

I'm hopeful that, in time, this will be fixed. Not so much because I believe the latency induced by the flawed SIP architecture is material to SIP subscribers, but instead for 2 reasons:

1. The very same architecture that adds latency makes the SIP a SPOF in our market system and as the NASDAQ Tape C outage showed, it can really suck when the SIP doesn't work;

2. The PERCEPTION of unfairness is much more harmful than any actual harm done due to the SIP/direct latency delta. Fixing the SIP can directly correct the source of the perception of unfairness and bring some credibility to the marketplace and its governance.


The regulators are aware of the architectural disadvantages of the SIP. The problem exchanges face is not to disadvantage the submission of data to the SIP by letting the prop feed applications live on faster hardware (systems & network) than the applications that submit to the SIP. The issue I'm referring to is in the article below: the disadvantage occurred even before the SIP received the packet containing the data.

http://online.wsj.com/articles/SB100008723963904435249045776...

Those who truly care about latency will be reading the direct market data feeds anyway.

The problem I was highlighting is: what definition of "same" should the SEC or the exchanges be held to? When measuring 2 packets with the same information egressing an exchange, what delta is appropriate to be considered the "same" time? Should the delta be within 1 microsecond? 10 microseconds? 1 millisecond? If the acceptable delta is, say, 10 microseconds, what's the acceptable percentile at which the prop data was 10 microseconds faster than the SIP data, or the SIP data was 10 microseconds faster than the prop data? Or is the exchange in compliance as long as the deltas at the 99th percentile don't exceed 10 microseconds?
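As a sketch of what a compliance test at a chosen percentile might look like (hypothetical thresholds and paired egress timestamps, not a regulatory definition):

    # Hypothetical: paired egress timestamps (ns) for the same message on the
    # prop feed and on the SIP feed, captured at the exchange edge.
    def percentile(values, pct):
        """Nearest-rank percentile; pct in 0..100."""
        ordered = sorted(values)
        rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
        return ordered[rank]

    def in_compliance(prop_ns, sip_ns, max_delta_ns=10_000, pct=99.0):
        """True if the |prop - SIP| delta stays under max_delta_ns at the
        given percentile. 10 microseconds at p99 is an example cap only."""
        deltas = [abs(p - s) for p, s in zip(prop_ns, sip_ns)]
        return percentile(deltas, pct) <= max_delta_ns

    # 10,000 messages, 2% of them bursting to a 50 us delta:
    prop = list(range(0, 1_000_000, 100))
    sip = [t + (50_000 if i % 50 == 0 else 2_000) for i, t in enumerate(prop)]
    print(in_compliance(prop, sip, pct=95.0))  # True: p95 delta is ~2 us
    print(in_compliance(prop, sip, pct=99.0))  # False: p99 delta is ~50 us

The same data passes or fails depending purely on which percentile and cap you pick, which is the whole ambiguity.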

Nanoseconds count due to efforts like equidistant cabling that the exchanges employ.

For a web property such as the SEC's, should the push service only push out when the webpage is updated? What if the webpages are behind a load balancer and the multiple webservers synchronize within several milliseconds? Web requests hitting a webserver that is slightly behind in synchronization could be several hundred milliseconds behind, while the push message has already been out for several seconds.

The article is splitting hairs over seconds, which in the web world isn't a big deal to human consumers. But where machines are consuming, even 1 millisecond is an eternity.


Fair points re: the SIP. However, it is unfortunate that submission is the benchmark for fairness. There is no reason it needs to be that way, and fixing it by distributing the SIP (i.e., removing the centralized processor) has numerous advantages that put to rest the latency, fairness, and SPOF issues.

In that regime, measuring "same" becomes simple: measure at the source the delta between the venue-specific SIP feed (which contains the venue's view of the NBBO) and the depth-of-book feed. Over the course of a day, that delta should effectively be 0.

You're absolutely right about the SEC website issue. It's silly to get upset about this since the "web" aspect of the distribution has so many layers.


Regarding the perception of unfairness, most of that should be a non-issue as of Q2 2014.

http://www.utpplan.com/DOC/Q2%20US%20Consolidated%20Tape%20D...

http://www.utpplan.com/DOC/Q1%202014%20U.S.%20Consolidated%2...

Tape C average SIP latency went from ~1 ms in Q1 2014 to 40-50 microseconds in Q2 2014. That's about 20x faster. A 40-50 microsecond average is about the technological limit for what these systems can do unless one resorts to FPGAs, and even then the improvement to be gained is taking a 40-50 microsecond system down to a 5-10 microsecond system.

The best network switches today with cut-through propagation have port to port latencies of around 200ns. These things are pretty much approaching the speed of light.


Those latencies aren't well defined, and likely represent input-side to output-side processing. Also, they are in the .50 ms range, which is 500 us, not 50 us. What isn't captured here is the significant added transit delay due to forcing everything through a centralized processor. They are also averages, which are largely useless when dealing with market data. Let's talk about 95th and 99th percentiles. It's the bursts that kill you.
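To illustrate with made-up numbers (not figures from the tape reports), a distribution that is mostly ~40 us with a small fraction of millisecond-scale bursts still produces a comfortable-looking average:

    import statistics

    # Synthetic: 99% of updates at 40 us, 1% bursting to 2 ms during heavy periods.
    samples_us = [40.0] * 9900 + [2_000.0] * 100

    mean = statistics.fmean(samples_us)
    cuts = statistics.quantiles(samples_us, n=100)   # 99 percentile cut points
    p95, p99 = cuts[94], cuts[98]

    print(f"mean={mean:.1f}us p95={p95:.1f}us p99={p99:.1f}us max={max(samples_us):.0f}us")
    # The mean lands around 60 us; p99 and max are what actually hurt consumers.

That is why an average latency number in a quarterly report says very little about what subscribers see during bursts.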

The perception of fairness as it relates to the SIP will always be an issue (even if it isn't a practical issue) as long as there are "two highways," if you will, for market data. There is no reason it needs to be this way. Decomposing the SIP solves all of these issues and embraces the facts that already exist today: the SIP is a leaky lock on a distributed market that can be bypassed with ISOs, and there is no such thing as the NBBO because the NBBO is entirely relative to the point of observation.

Doing so, however, would require the venues to give up a major selling point for their lucrative direct data feeds. Not likely to happen.


>Well if it is measured at the 1-second or 1-millisecond level, yes the exchanges are in compliance. At the 100-microsecond, 10-microsecond and nanosecond level, perhaps the exchange is in compliance at the 50th percentile.

And once you hit the <1 millisecond level, it's hard to even get reliable measurements, which makes compliance for "same time delivery" really really freaking hard.

Keep in mind, when you're talking about the nanosecond level, you're at the point where the length of cabling between the systems matters; a nanosecond is roughly 20 cm of fiber.

Sure, there's a lot of new tech coming out, especially with using GPS to synchronize clocks, but it's still a major issue.


It's a hard problem, but it's not insurmountable. Most places are getting these measurements by using PTP with hardware timestamping. Solarflare has NIC offerings where packets are hardware-timestamped on the wire, regardless of any queueing that occurs in the internal socket buffer, with that data made available through several special metachannels.
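For the non-Solarflare case, here is a minimal sketch of pulling hardware RX timestamps through the generic Linux SO_TIMESTAMPING socket API. Constant values and the struct layout are assumptions for 64-bit Linux; the NIC and driver must support hardware timestamping (typically enabled via ethtool/SIOCSHWTSTAMP), with the NIC clock disciplined by PTP.

    import socket
    import struct

    # Values from <linux/net_tstamp.h>; assumed, not exported by every Python build.
    SO_TIMESTAMPING = getattr(socket, "SO_TIMESTAMPING", 37)
    SOF_TIMESTAMPING_RX_HARDWARE = 1 << 2    # stamp in the NIC on receive
    SOF_TIMESTAMPING_RAW_HARDWARE = 1 << 6   # report the raw NIC clock value

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5000))             # hypothetical feed port
    sock.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMPING,
                    SOF_TIMESTAMPING_RX_HARDWARE | SOF_TIMESTAMPING_RAW_HARDWARE)

    data, ancdata, flags, addr = sock.recvmsg(65535, 1024)
    for cmsg_level, cmsg_type, cmsg_data in ancdata:
        # SCM_TIMESTAMPING (== SO_TIMESTAMPING) carries three timespecs;
        # the third is the raw hardware timestamp (48-byte layout assumed).
        if cmsg_level == socket.SOL_SOCKET and cmsg_type == SO_TIMESTAMPING:
            ts = struct.unpack("6q", cmsg_data[:48])
            hw_sec, hw_nsec = ts[4], ts[5]
            print(f"hw rx timestamp: {hw_sec}.{hw_nsec:09d}")

Vendor interfaces like Solarflare's expose the same wire-level stamps with lower overhead, but the kernel path above shows the general shape of the data you'd compare across capture points.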



