
Yeah! CockroachDB is also a really cool multi-cloud DB. That being said, it's really aimed more at transactional workloads, and less purpose-built for time-series.

I guess there are always trade-offs in the software world.


You select the public cloud vendor you want your machine spun up on. So no, if AWS has a full outage, it won't fall back to a different cloud. Failover is done at an availability zone level.

Since TimescaleDB is also open-source, if you want that kind of replication scheme, you can always install it on VMs across clouds. However, as you rightly pointed out, network latency is a definite concern and impacts the RPO and RTO you can achieve.


One thing to add:

What Timescale Cloud does allow you to do is create asynchronous read replicas across different clouds and regions (with a couple of clicks).

You can then "fork" a read replica (at any point in time) and make it a primary to start serving out of that cloud (again, with just a couple clicks).

That's not quite the same as auto-replication/failover between clouds, but it gets you pretty far there.


I think the quickest comparison is SQL vs NoSQL. We haven't done performance benchmarks against Druid yet, but do know of several users who have switched because they want to use PostgreSQL instead.


Very cool! I haven't been able to find a more real-time data source yet. Thanks for sharing!


Yeah - TimescaleDB comes with a time_bucket function that allows you to group things by minute, and you can specify a where clause that queries just the last 5 days. You can build indexes that include the ticker, and also reorder data on disk to minimize how much of it you scan. So, TLDR - you should definitely try it! I did some quick googling, and it looks like Metabase supports PostgreSQL, so it should work with TimescaleDB. We would love to hear how it goes!
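To sketch what that pattern might look like (the prices table and its columns here are made-up names for illustration, not something from your setup):

    -- hypothetical schema: prices(ticker, time, price)
    CREATE INDEX ON prices (ticker, time DESC);   -- index that includes the ticker

    SELECT time_bucket('1 minute', time) AS minute,
           ticker,
           avg(price) AS avg_price
      FROM prices
     WHERE time > now() - interval '5 days'       -- only scan the last 5 days
     GROUP BY minute, ticker
     ORDER BY minute;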


We haven't done a formal price comparison, since it's actually a bit hard to compare apples to apples since the two databases are architected differently. Definitely something we should consider doing! Thanks for the idea.

Migration wise, I would use Outflux for batch migration and Telegraf to support a live migration (https://docs.timescale.com/v1.3/tutorials/outflux) and (https://docs.timescale.com/v1.3/tutorials/telegraf-output-pl...).


It's more like pay-for-what-you-use. You can check out the pricing calculator for more detail: https://www.timescale.com/cloud-pricing

Growing, shrinking, and migrating involve moving to a different instance type, so you have to select a new one. That being said, there is very little downtime (on the order of 3-5 seconds while DNS resolves).


Thanks for the clarifications, that's helpful.

I wouldn't call it pay-for-what-you-use unless the pricing varies with your actual usage instead of changing when you change plans.


Interesting point of view - it's certainly always a bit hard to find the right verbiage that everyone can understand, but hopefully this discussion clarified things!


Last time I used a traditional hosting provider, I could get a new bare metal server set up in under half an hour. I would hardly call them "pay what you use", even though I could start and stop servers and change plans, and still be only two to three times slower than doing the same on AWS.


Certainly - I've been seeing a bunch of usage-based pricing models that price on a different metric (metrics ingested per second, etc.).

Regardless, with Timescale Cloud, if you get a machine, you pay the price for that machine for as long as you use it. So I guess to avoid the confusion, we can call this just paying for the machine :)


By the way, I've recently started using TimescaleDB (past month or two) for processing cryptocurrency trading information and I'm liking it a lot so far. I love that I can use Postgres as normal, but have efficient time-based queries.

My first ever test query was to generate minutely OHLC+volume from time,price,quantity trades. It was pleasantly easy to do:

    select time_bucket('1 minutes', time) as minutely, 
           max(price) as high,
           min(price) as low,
           first(price, time) as open,
           last(price, time) as close,
           sum(quantity) as volume
      from trades
    group by minutely
    order by minutely;
https://gist.github.com/danielytics/e9b69933586e00732646e016...


Plus, growing, shrinking, and migrating only require a few clicks.


How do you do live migrations? Do you shard, then suspend existing queries, and finally redirect?


We spin up a separate instance that matches the type you want to migrate to, restore a backup, and stream the WAL. Then, we redirect.
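If you want to watch that kind of cutover yourself, the standard PostgreSQL replication views are enough - this is just a generic sketch, not our internal tooling:

    -- On the source instance: see how far behind the new instance's WAL stream is.
    SELECT application_name,
           state,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
      FROM pg_stat_replication;
    -- Redirect traffic once the lag is effectively zero and the new instance is promoted.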


The good ol' event sourcing trick :D


[cockroachdb here] We are big fans of RethinkDB, but also glad to hear that you'll explore CockroachDB. Let us know how it goes, and definitely file any issues / feature requests in our GitHub repo!


Thanks for pointing that out! We will fix that and make it optional on our end :)


[cockroachdb here] Thanks for the great response, bpicolo!

