Hacker News

The Bloomberg APIs I’ve worked with were… really not great. At one point our Bloomberg rep started talking up their new “REST API” and how excellent it was. Well, I made the mistake of saying I’d give it a shot.

Turns out each request for data didn’t return the data. It returned another URL that you then had to poll. That URL behaved like a multicast feed: it broadcast messages for everyone’s requests, so you had to filter out every message that didn’t match your own request ID.

At some point you’d get a message that did match, and it might tell you that the data was unavailable, that something was wrong with your request, or that your data was finally ready. But the data would be in this insane proprietary format, and you’d have to write your own parser to go through it line by line and get it into something resembling a CSV.
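For anyone curious, the workflow looked roughly like this. To be clear, this is a sketch of the *shape* of it, not the real API: the message fields, status values, and the pipe-delimited wire format below are all made up for illustration.

```python
import csv
import io

# Hypothetical messages as they might arrive on the shared polling feed:
# every consumer sees every message, each tagged with a request ID.
FEED = [
    {"request_id": "req-41", "status": "PENDING"},
    {"request_id": "req-42", "status": "PENDING"},
    {"request_id": "req-41", "status": "DONE", "payload": "..."},
    {"request_id": "req-42", "status": "DONE",
     "payload": "START-OF-DATA\n"
                "IBM|2023-01-03|141.55\n"
                "IBM|2023-01-04|142.60\n"
                "END-OF-DATA"},
]

def poll_for_result(feed, my_request_id):
    """Ignore every message that belongs to someone else's request;
    return the payload once our own request reaches a terminal state."""
    for msg in feed:
        if msg["request_id"] != my_request_id:
            continue  # someone else's data; skip it
        if msg["status"] == "DONE":
            return msg["payload"]
    raise TimeoutError("never saw a terminal message for our request ID")

def proprietary_to_csv(payload):
    """Line-by-line parse of the (made-up) delimited body into CSV text."""
    out = io.StringIO()
    writer = csv.writer(out)
    in_data = False
    for line in payload.splitlines():
        if line == "START-OF-DATA":
            in_data = True
        elif line == "END-OF-DATA":
            in_data = False
        elif in_data:
            writer.writerow(line.split("|"))
    return out.getvalue()

payload = poll_for_result(FEED, "req-42")
print(proprietary_to_csv(payload))
```

The point being: the client has to do all the work — correlating messages, deciding when to stop polling, and hand-rolling a parser — that a normal REST API would handle by just returning your rows in the response.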

This was marketed as the easy-to-use API for data scientists, too. We didn’t purchase access after the trial ended. For personal work I just use Polygon.io now.

We are still making requests over the SFTP server, but I guess that will change sooner rather than later. However, when it comes to Refinitiv, I can't complain about the REST API for DataScope Select and Tick History.

Would you know how good the quality of the options data at polygon.io is?


Yeah, seems to be pretty typical still. Hopefully it changes sooner rather than later.

That's interesting to hear about the Refinitiv API. Are there any public docs for it, and do you know if they provide client libraries for Python and R? Obviously data quality has to be our number one priority, but I'm so fed up with writing boilerplate code to interact with the large data providers that at this point my second priority is just ergonomics. That means a decent client library that's dependency-free (looking at you, Bloomberg, with your blpapi package) and supports both batch and streaming data.

From what I understand, the Polygon folks did have some data quality issues around their initial release, but based on my own experience and anecdotes in /r/algotrading, they've improved significantly. Can't really provide anything more concrete than that, sorry.

Another provider I like is Intrinio; they have some pretty interesting methods for automating QA, especially for some of the more unstructured data like 10-Ks and 10-Qs.

