
But does it work? It wins the award for the buggiest database server I've used in 20 years.

Is writing less than 20MB/sec of data something to brag about?




We're using it with GitLab.com and we like it. 0.9 didn't work at all due to the volume of data, but with 0.10 everything is functioning OK.


A full cluster or a single node?


We currently run InfluxDB 0.10.0-nightly-614a37c (I have yet to upgrade it to the stable release) on a single DigitalOcean instance with 8GB of RAM and 30-something GB of storage. The previous stable release (0.9.x) didn't fare very well, even after we significantly reduced the amount of data we were sending (we were sending a lot of data we didn't really need).

Switching to 0.10.0-nightly-614a37c in combination with switching to the TSM engine resulted in a very stable InfluxDB instance. So far my only gripe has been that some queries can get pretty slow (e.g. counting a value in a large measurement can take ages) but work is being done on improving the query engine (https://github.com/influxdb/influxdb/pull/5196).
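
To give a rough idea of the kind of query that gets slow (the measurement and field names here are placeholders, not our exact schema), it's basically a bare count over a whole measurement, e.g.:

  SELECT COUNT(value) FROM rails_requests WHERE time > now() - 30d

Counting across a measurement with tens of millions of points is exactly the case that takes ages for us right now.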

To give you an idea of the data:

* Our default retention policy is currently 30 days (the equivalent InfluxQL is shown after this list)

* 24 measurements, 11,975 series. Our largest measurement (which tracks the number of Rails/Rack requests) has a total of 28,539,279 points

* Roughly 2.3 out of the 8 GB of RAM is being used

* Roughly 4 GB of data is stored on disk
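
The retention policy itself is just the stock InfluxQL setup; the database and policy names below are made up, but it amounts to:

  CREATE RETENTION POLICY "thirty_days" ON "gitlab" DURATION 30d REPLICATION 1 DEFAULT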

This whole setup is used to monitor GitLab.com as well as aid in making things faster (see https://gitlab.com/gitlab-com/operations/issues/42 for more info on the ongoing work).


Thanks for the information. :)

Unfortunately, I need 2+ instances with Active/Active or failover to seriously consider anything for production, which is why I've not touched InfluxDB beyond some light testing.


Am I correct in assuming that you got to the 20MB/sec number by multiplying our 3-bytes-per-point figure by the number of data points?

The input is actually much higher than that. Data points over the network look like this:

cpu_idle,host=serverA,region=uswest value=23.0 1454617920

That's actually a toy example. Most real data would probably have more tags and longer measurement names. Obviously that's much more than 3 bytes.
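
To put numbers on it: that toy line is already about 57 bytes on the wire, versus the roughly 3 bytes per point it eventually compresses down to, so working backwards from 3 bytes per point understates the actual ingest bandwidth by something like a factor of 20.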

We persist that to disk in a write-ahead log (WAL), and later we run compression and compactions on the data to squeeze it down to 3 bytes per point. Getting there takes more than a single write to disk, though.
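
To make the idea concrete, here's a toy sketch of that append-to-a-WAL-then-compact pattern in Python. It is not InfluxDB's actual code, just an illustration of why a point hits the disk more than once before it ends up at ~3 bytes; the file names and zlib compression are stand-ins:

  # Toy sketch of the WAL-then-compact pattern described above -- not
  # InfluxDB's implementation. Points are appended (and fsynced) to a
  # write-ahead log as they arrive; a later compaction pass re-encodes
  # the log into a much smaller compressed file.
  import os
  import zlib

  WAL_PATH = "points.wal"         # hypothetical file names
  TSM_PATH = "points.compacted"

  def write_point(line):
      """Durably append one line-protocol point to the WAL."""
      with open(WAL_PATH, "ab") as wal:
          wal.write(line.encode("utf-8") + b"\n")
          wal.flush()
          os.fsync(wal.fileno())  # first disk write: uncompressed, per point

  def compact():
      """Later pass: rewrite the WAL into a compressed file, then drop the WAL."""
      with open(WAL_PATH, "rb") as wal:
          raw = wal.read()
      with open(TSM_PATH, "wb") as out:
          out.write(zlib.compress(raw, 9))  # second disk write: compacted form
      os.remove(WAL_PATH)

  write_point("cpu_idle,host=serverA,region=uswest value=23.0 1454617920")
  compact()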

Run a load test against it. See how much network bandwidth you can use. See what your HD utilization looks like. My guess is you'll be surprised by what you see.
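
If it helps, here's a minimal sketch of what I mean (Python plus the requests library): it batches line-protocol points against the HTTP /write endpoint and reports the on-wire throughput. The URL, database name, and batch sizes are placeholders to tune for your own setup, and the target database has to exist already:

  # Rough load-test sketch, not a benchmark suite: push batches of
  # line-protocol points at InfluxDB's HTTP /write endpoint and report
  # how many MB/sec it accepted on the wire.
  import random
  import time

  import requests

  INFLUX_URL = "http://localhost:8086/write"  # assumes a local InfluxDB
  DATABASE = "loadtest"   # must already exist (CREATE DATABASE loadtest)
  BATCH_SIZE = 5000
  BATCHES = 100

  def make_batch(size):
      """Build one batch of line-protocol points, one point per line."""
      now_ns = time.time_ns()
      lines = []
      for i in range(size):
          # Distinct timestamps so points don't overwrite each other.
          lines.append(
              "cpu_idle,host=server%d,region=uswest value=%.2f %d"
              % (i % 50, random.uniform(0, 100), now_ns + i)
          )
      return "\n".join(lines).encode("utf-8")

  def main():
      sent_bytes = 0
      start = time.time()
      for _ in range(BATCHES):
          body = make_batch(BATCH_SIZE)
          resp = requests.post(INFLUX_URL, params={"db": DATABASE}, data=body)
          resp.raise_for_status()  # InfluxDB answers 204 No Content on success
          sent_bytes += len(body)
      elapsed = time.time() - start
      print("wrote %d points, %.1f MB on the wire, %.1f MB/sec"
            % (BATCH_SIZE * BATCHES, sent_bytes / 1e6, sent_bytes / 1e6 / elapsed))

  if __name__ == "__main__":
      main()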


And about the bugs...? My experience with 0.8 and 0.9 was somewhat sub-par. I'm personally waiting for 1.x, and it wouldn't surprise me if others are too.





