Thank you for your kind comment! We would consider offloading to a GPU when the time is right. It feels right to fully utilise CPU capabilities before jumping to GPU.
In the future, we will release other products with features geared towards enterprises. We don't think that having our code open will harm the business.
On the contrary, it will hopefully help us gain some popularity by allowing any developer to access latency levels only seen in trading. We hope that QuestDB will allow them to stop worrying about database performance bottlenecks.
Window functions are in draft; we will release them imminently. We will support moving averages, and in fact we plan to support generic multi-pass and window functions. Having multi-pass will allow you to do things like `select sum(x - sum(x)) from tab`, where the inner `sum(x)` is computed in a first pass over the table and reused as a scalar in the second.
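As a sketch of the moving-average case in standard window-function syntax (the feature is still in draft, so the final grammar may differ; `trades`, `sym`, and `price` are made-up names):

```sql
-- 20-row moving average per symbol, standard window-function syntax
SELECT ts, sym, price,
       avg(price) OVER (
           PARTITION BY sym
           ORDER BY ts
           ROWS BETWEEN 19 PRECEDING AND CURRENT ROW
       ) AS mavg
FROM trades;
```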
Those things are useless if your time points aren't equidistant (i.e., you don't care about the last 100 rows, you care about the last 5 minutes). They basically force you to use very slow correlated subqueries. Please do something better.
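Something like the SQL:2011 time-range frame would cover this (a sketch; engine support for RANGE frames with interval offsets varies, and `trades` and `price` are made-up names):

```sql
-- average over the trailing 5 minutes, not the trailing N rows
SELECT ts, price,
       avg(price) OVER (
           ORDER BY ts
           RANGE BETWEEN INTERVAL '5' MINUTE PRECEDING AND CURRENT ROW
       ) AS avg_5min
FROM trades;
```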
QuestDB | Head of Dev Rel / Community Manager | Remote, London/SF
QuestDB is an open source SQL database to process time-series data, faster. The founding team comes from low-latency trading; fast software and timestamped data are part of our DNA. Through QuestDB, companies can harness the power of real-time and big-data processing in a wide array of use cases and industries, from financial data to IoT and connected cars. We are funded by leading European VCs.
Helping developers solve their problems is at the center of what we do. We are looking for a Head of Developer Relations to spur developer adoption and usage, grow the brand, and empower developers with faster software that kicks Moore's law (or rather the end of it) in the teeth.
It is a good strategy (and an interesting product). The execution side might be complex for retail traders. Execution is particularly important because although you get a cheap structure (selling one leg, buying the other), you end up crossing the spread twice.
This sort of trade is usually done OTC, or alternatively with a contingent algorithm (rest passively on one leg, and fire the second leg automatically as the first is filled), which cuts the cost from paying 2x the spread to 0. The latter has the tradeoff of taking longer and potentially missing the opportunity.
I know they are sometimes called this, but I find the name "credit spread trading" confusing. Are these strategies different from collars? I traded credit in the past, and my immediate understanding of "credit spread trading" in this context is shorting bonds of Boeing and buying Treasuries (or trading US rates), for example. That would also be a valid strategy in this case, using credit and rates instruments as a vehicle rather than equity instruments, but it is not accessible to retail traders.
I don't think it's fair to say "A is faster than B", as in the above comments, based on the order they appear in a list that mixes results from GPU clusters and laptops. The author of the benchmark does nothing wrong ethically, but the results table seems ordered by time, and some people jump to quick conclusions or use it to rank performance when that's not appropriate.
Former options trader here. The reason it's called stupid flow is that 99.999% of retail traders use options wrongly and don't understand them. That said, you can educate yourself and use options properly. But don't let any broker or retail educator "educate" you; they don't want you to be good and they have huge conflicts of interest. Get your education from technical articles and by learning the underlying maths and pricing dynamics, particularly the relationship between the implied volatility smile and the price distribution.
One of the main reasons they are traded dumbly by retail has to do with the ban on CFDs in the US. As retail investors want leverage (attracted by the quick buck rather than by making money over the long term), options provide an alternative, and retail brokers push them to customers.
But there is a massively misunderstood dynamic: time. You don't only have to be right; you have to be right by a certain time. Another misunderstood dynamic is risk management. Options are useful as part of a portfolio, but if you use them only as a means to get more leverage on your directional portfolio, you will end up like all the traders who lose money and don't understand why. Yet the reason is simple: you took too much risk.
Good stuff. IMO, no one worth less than $1M should even be trading symbols, much less options. The "poor" should pour their money into low-fee Target Retirement 20XX funds and let them sit.
The comments about events being "priced in" are absolutely right. With a little work, you can make this transparent and use it to your advantage.
In simple terms, the price of an option at a given time is the discounted sum, over all possible outcomes, of what you gain in that outcome (its payout) times the likelihood of that outcome (given by the implied distribution of the asset at maturity). The interesting consequence is that you can reverse-engineer option quotes to derive the market-implied probability distribution of a given asset at expiry. You can then compare this to your own expectations to enter positions (for example, if you think the market overpriced or underpriced a given event, trade against it with options).
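As a formula (a sketch under standard assumptions: European call struck at K, constant risk-free rate r, maturity T, and φ the implied density of the asset price at expiry):

```latex
% Call price: discounted expected payout under the implied density \varphi.
C(K) = e^{-rT} \int_{0}^{\infty} (s - K)^{+} \, \varphi(s) \, \mathrm{d}s
```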
You first need to calculate the implied volatility of the option quotes (both calls and puts, on both bid and ask), which requires you to correctly adjust your forward, i.e., dividends and rates, to obtain put-call parity. If your forward is wrong, your implied volatility curves will look off (for example, put bids above call asks), which means you have the wrong rates or dividend expectations.
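For reference, put-call parity in the same notation, where F is the forward embedding your rate and dividend assumptions (a sketch for European options):

```latex
% If this relation fails on your quotes, your forward (rates/divs) is off.
C(K) - P(K) = e^{-rT} (F - K)
\quad \Longrightarrow \quad
F = K + e^{rT} \left( C(K) - P(K) \right)
```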
Once you have computed the implied volatilities and are happy with your forward, you can fit a curve between your four series (call ask, call bid, put ask, put bid). This is your implied volatility mark. You can then use this volatility mark to derive an implied probability density. There is a simple example of how this is done here:
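The step from volatility mark to density is the Breeden-Litzenberger relation: the implied density is the second derivative, with respect to strike, of the call price rebuilt from your fitted mark (a sketch; in practice you would approximate it with finite differences over a strike grid):

```latex
% Implied density at expiry from the strike-differentiated call price.
\varphi(K) = e^{rT} \, \frac{\partial^{2} C}{\partial K^{2}}
```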
This is actually really useful when you are trying to manage your risk around a given event. It also has interesting dynamics. Back in 2014, for example, we were worried about our risk on PBR US (a massive petro company with strong political links) ahead of the Brazilian elections. Using this method, we found that the implied distribution of the stock was bimodal, each mode corresponding to one outcome of the election. This gave us an idea of how much the stock could move either way and helped us cover the risk.
If you would like to see whether a given event is indeed priced in as you expect, you can use this method, bearing in mind there is a timing element: pick the option expiry just after the event or horizon you are considering.
One last point that is important to consider: this is a “market-implied distribution” and does not predict the future behavior of the asset in question. It merely gives you an idea of the expectations of market participants at this moment. Moreover, it is highly dependent on your inputs (dividends, rates, and how you fit your volatility curve between bid/ask option quotes, particularly on the wings).
QuestDB | Head of Dev Relations | San Francisco | Full-time
QuestDB (http://questdb.io) is a startup building an open-source time-series database that makes nanosecond-latency performance accessible to everyone. By focusing on core software efficiency we are able to do two things. First, we help developers build true real-time applications without requiring complex technology. Second, we get more throughput out of the same unit of hardware, which significantly reduces costs at scale.
Helping developers achieve their goals is at the center of what we do. Your role will consist of engaging with the user base and orchestrating the growth of our open-source communities by reaching out to developers and helping them solve their problems.
This is an open-ended role with a substantial degree of autonomy. You will have full liberty to develop outreach channels such as promoting content to spur adoption, planning events, and marketing, PR, and social media initiatives, as well as any additional direction you deem relevant.
If you are interested, please email careers@questdb.io.
Hey, generally users like to define datasets so that they look like the data they will eventually use in their application. As such, they are all bespoke and I don't know of any "standard".
But generating bespoke random data quickly is easy. To generate such datasets, you can try QuestDB, which lets you do it in SQL. You can define patterns (for example, for timestamps) or constraints, as in the sketch below. Feel free to try it and to Slack us if you need help.
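A minimal sketch using QuestDB's random-generation SQL (I'm using `long_sequence`, `timestamp_sequence`, `rnd_symbol`, and `rnd_double` from the QuestDB function set; exact signatures may vary by version, and the column names are made up):

```sql
-- 1,000 rows of synthetic readings: a timestamp every 100ms,
-- a random sensor id, and a random value between 0 and 100.
SELECT
    timestamp_sequence(
        to_timestamp('2020-01-01T00:00:00', 'yyyy-MM-ddTHH:mm:ss'),
        100000L  -- step in microseconds
    ) AS ts,
    rnd_symbol('sensor_a', 'sensor_b', 'sensor_c') AS sensor_id,
    rnd_double() * 100 AS reading
FROM long_sequence(1000);
```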