We compress Pub/Sub messages and more, saving a load of money (lawrencejones.dev)
46 points by kiyanwang on Jan 3, 2021 | 27 comments



Hey all - I'm the author and just noticed this post. Thanks for the repost!

If you're interested, there was a nice discussion in /r/devops about this the other day: https://www.reddit.com/r/devops/comments/kmltbx/how_we_compr...


Related tip: anywhere you're looking to deploy compression in a backend, consider zstd (https://facebook.github.io/zstd/). I've found it suitable for streaming compression (Gbps+ flows without exorbitant CPU usage), storage compression (high compression ratios at higher levels; fast decompression), and found implementations available to every language I've needed. It comes out strictly better than gzip ~across the board in my experience, and should be the default compressor to choose or start evaluations from if you have no other constraints.
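If it helps, here's a minimal sketch with the Python `zstandard` bindings (my own illustration with made-up sample data, not anything from the article; most languages have an equivalent API):

    import zstandard as zstd

    data = b'{"level":"info","msg":"request handled"}\n' * 10_000

    # Level 3 is the default; higher levels trade CPU for ratio,
    # negative levels trade ratio for speed.
    compressed = zstd.ZstdCompressor(level=3).compress(data)
    assert zstd.ZstdDecompressor().decompress(compressed) == data

    print(f"{len(data) / len(compressed):.0f}x smaller")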

I don't think it's yet deployed in browsers, so I'm restricting my recommendation to tools and backends. IIRC brotli is worth considering for browsers, but I haven't deployed it myself.


I wanted to try zstd for a Rust-based application running on Android. The cross-compilation and integration left me in pain, so I'm stuck with `zlib`, which is also great IMHO.


I'll note that zlib and gzip are approximately the same, with the same algorithm under the hood (`deflate`) but different framing. But if it's working well for your use-case, that's all that really matters - especially over having to deal with C interop.
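To make the framing point concrete, here's a sketch with Python's standard `zlib` module, where the `wbits` parameter selects the wrapper around the same DEFLATE stream:

    import zlib

    def deflate(data, wbits):
        co = zlib.compressobj(level=6, wbits=wbits)
        return co.compress(data) + co.flush()

    data = b"the same bytes, three different framings " * 100

    raw = deflate(data, -15)  # bare DEFLATE, no framing
    zl  = deflate(data, 15)   # zlib: 2-byte header + adler32 checksum
    gz  = deflate(data, 31)   # gzip: 10-byte header + crc32/size trailer

    # The compressed payload is identical; only the wrapper bytes differ.
    print(len(raw), len(zl), len(gz))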


So basically this is once more a case where cloud providers promise you everything, then corner you into unforgiving situations where you have to spend your most valuable resource (engineers' time) to avoid the costs that come with a particular vendor lock-in, just to keep the service running.

60 TB is a lot, but we can all agree that it surely doesn't warrant a one-way price tag of ~13,000 USD.

This statement in particular is worrisome: "Google Compute Engine network rates (we'll ignore these, as they get complicated)"

So you are basically buying a black box that can drive your business into the ground if you are not careful enough.

Yes, I am aware you can argue and plead for mercy once you get the bill, but this is not the way: they wouldn't be so keen to forgive your costs if doing so were a real expenditure for them. I did a napkin calculation for AWS and found it to be around 10x more expensive than a DIY solution, which means that out of 10 suckers, only one has to pay for them to turn a profit.

The more I read about cloud providers, the more I am convinced their major business rests on the fact that they have to buy hardware and software for their core business anyway; since they cannot scale it precisely, it is more lucrative to sell the slack as a "solution" to suckers at 10x and get rich in the process.


Since you’re on btrfs, a copy-on-write file system, you should also investigate disabling full page writes [1] for Postgres WAL. I disabled it for our Postgres cluster on ZFS and got a nice reduction in write volume.

[1]: https://www.2ndquadrant.com/en/blog/on-the-impact-of-full-pa...
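For reference, it's a one-line change (a sketch, not from the linked post; only consider it when the filesystem does atomic page writes, as copy-on-write filesystems like ZFS and btrfs do - read the linked post first):

    # postgresql.conf -- full_page_writes is reloadable, no restart needed
    full_page_writes = off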


Genuine question from someone from an entirely different world - why on earth do you have 10 billion log entries? What is in them and do you ever do anything with them that requires you to store so much data rather than just a representative subset?


Author here! These 10B log lines are from the last 60 days of activity from https://gocardless.com/ systems.

It includes:

- System logs, such as those from our Kubernetes VM hosts or our Chef-managed Postgres machines

- Application logs from Kubernetes pods

- HTTP and RPC logs

- Audit logs from Stackdriver (we use GCP for all our infrastructure)

> do you ever do anything with them that requires you to store so much data rather than just a representative subset?

Some of the logs are already sampled, such as VPC flow logs, but the majority aim for 100% capture.

Especially for application logs, which are used for audit and many other purposes, developers expect all of their logs to stick around for 60d.

Why we do this is quite simple: for the amount of value we get from storing this data, in terms of introspection, observability and in some cases differentiated product capabilities like fraud detection, the cost of running this cluster is quite a bargain.

I suspect we'll soon cross a threshold where keeping everything will cost us more than it's worth, but I'm confident we can significantly reduce our costs with a simple tagging system, where developers mark logs as requiring shorter retention windows.

Hopefully that gives you a good answer! In case you're interested, my previous post mentioned how keeping our HTTP logs around in a queryable form was really useful for helping make a product decision:

https://blog.lawrencejones.dev/connected-data/


Thanks for the response, really interesting to see how this stuff is used.


Are you also using Google Tracer? I haven't been able to get any traces to work for ages with Node.


>>> why on earth do you have 10 billion log entries?

It's pretty low volume actually. A small company with < 100 developers and servers can generate a billion logs over a few weeks.

Normal logs from the system, syslog, applications, databases, web servers... nothing fancy really. It's common practice to centralize all these into ElasticSearch or Splunk.

Their scale of 10 billion logs / 60 TB means they're a regular small-to-medium company.


You've nailed this!

This logging system was for all https://gocardless.com/ systems. We're a B2C company, which means we have different economies of scale than many scale-ups of our size, but you were close with your guess:

Currently 450 people worldwide, ~150 in product development, of which ~100 are full-time developers.


This seems suspect: that works out to approximately 25 log messages per developer per second, assuming a 10-hour work day.

I work in a tightly regulated industry (finance), and even my company doesn't have a need to log 25 messages per second per person.

Is anyone else able to validate this claim that regular small companies log this much data?


Anytime it's something ridiculous like this, I assume it's for compliance. A few industries require all info to be retained for 7 years.


Beware of on-the-fly compression: it adds network latency if you aren't careful. It's an important metric that gets overlooked in many articles on compression.


Did you use any kind of message framing? I got bit by this at a previous job, where we needed to change the message format to improve compression. We wound up figuring something out, but it would have been easier if we had reserved a byte for versioning.


I'm not quite sure what you mean by message framing.

If you mean marking messages as having been compressed, then absolutely yes. The Pub/Sub messages were tagged with `compress=true|false` so we could update downstream and upstream independently.

If you mean buffering log messages into a batched 'frame', then yes we did do this. We were taking about 500 log entries and compressing them into a single message, which was part of why the compression was so effective.

If you mean something different, then I'm at a loss and very interested in what you meant by the term!
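In case it's useful, here's a sketch of the attribute-tagging plus batching with the Python Pub/Sub client (topic name, handler, and codec are illustrative, not our exact setup):

    import json
    import zlib

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "log-events")  # hypothetical

    def publish_batch(entries):
        # Concatenate ~500 entries so the compressor can exploit the
        # repeated structure across log lines.
        payload = b"\n".join(json.dumps(e).encode() for e in entries)
        # Attributes are plain strings; consumers check them, so either
        # side can be updated independently.
        publisher.publish(topic_path, data=zlib.compress(payload),
                          compress="true")

    def on_message(message):
        data = message.data
        if message.attributes.get("compress") == "true":
            data = zlib.decompress(data)
        for line in data.splitlines():
            process(json.loads(line))  # hypothetical downstream handler
        message.ack()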


Compression has a CPU time cost, though. You spend less on storage but use your CPUs more. Does the extra load from compression cause your cluster to autoscale? If so, you may not be saving money.


FTA: "This post aimed to cover a scenario where the cost of compression, in both compute resource and time-to-build, was significantly outweighed by the savings it would make."


Author here! A few others have pointed this out, but to restate: in our situation, compression was costing us almost nothing.

60TB of logs was 60 days' worth of retention, so 1TB a day. That means we process about 11MB/s on average, peaking at 100MB/s.

A single CPU core can manage 100MB/s of compression, so if you assume we compress and decompress in multiple places, let's say we're paying about 4 CPU cores constantly for this.

That's a pretty worst-case scenario, and it would cost us $37.50/month on GCP for those cores, in order to save about 100x that amount.
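Written out (using the numbers above; the per-core pricing is our figure from the time, not a current GCP list price):

    TB = 10**12

    # 60 TB retained over 60 days => ~1 TB/day through the pipeline.
    avg_mb_s = 1 * TB / 86_400 / 1e6
    print(f"average throughput: {avg_mb_s:.1f} MB/s")  # ~11.6 MB/s

    # One core sustains ~100 MB/s of compression; budget 4 cores to
    # cover compressing and decompressing at several points.
    cpu_cost = 37.50  # $/month for 4 cores
    print(f"${cpu_cost}/mo of CPU vs ~${cpu_cost * 100:.0f}/mo saved")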

The takeaway (for me at least) is that compressing an in-flight transfer is almost always worthwhile if you're working in the cloud and the transfer will eventually be stored somewhere. The economics of CPU cost vs total storage cost make it a no-brainer.

Hope that makes sense!


Thanks for the clarification.


My intuition is that you can save CPU time by compressing before sending over the network (and certainly wall time).

A quick test copying a 24 MB file (with similar compression ratios) to S3 showed a 6% decrease in CPU time when piping through gzip.


Depends on selected compressor, but yes, you can. I've definitely observed zstd-1 to be a net savings, where compression/decompression costs were offset by pushing fewer bytes through the RPC and network layers - and this was only from observing the endpoints, not even counting CPU on intermediate proxies/firewalls/etc.

I wouldn't normally expect gzip to be a net savings (it's comparatively more expensive), but depending on compression ratio achieved and what layers you're passing the bytes through, I'd definitely believe it can be in some contexts.


Data sent to S3 is usually hashed (depending on authentication type) in addition to being transport-encrypted; I imagine the majority of the cost here is the encryption of the larger payload (which many would consider indispensable, but I point it out because I do not generally assume encryption when I merely consider sending data "over the network").


You assume incorrectly. SSL encryption runs on the order of 1 GB/s on a recent CPU with AES instructions (anything from this decade).

Gzip runs on the order of 10 MB/s with default settings, down to 1 MB/s at the strongest compression setting. It's really, really slow.


GNU gzip, the application, is slow (on the order of 10 MB/s) because of how it does file IO, but the DEFLATE algorithm gzip is built on is much faster than 10 MB/s at the default level 6. For example, the slz implementation of DEFLATE compresses text at 1 GB/s [1]. Even the fairly common zlib implementation can compress text at close to 300 MB/s.

http://www.libslz.org/


DEFLATE at level 6 really is doing 10 MB/s; it doesn't matter whether you're using gzip, zlib, or another library.

slz is closer to level 20 (if there were a level 20). It's fast, but the compression ratio is meh. You're better off using lz4 or zstd.
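This sort of dispute is easy to settle against your own corpus. A rough throughput check along these lines (file name is made up; results swing a lot with input entropy, which is probably why the figures in this thread disagree):

    import time
    import zlib

    data = open("sample.log", "rb").read()  # any representative data

    for level in (1, 6, 9):
        start = time.perf_counter()
        out = zlib.compress(data, level)
        secs = time.perf_counter() - start
        print(f"level {level}: {len(data) / secs / 1e6:6.1f} MB/s, "
              f"ratio {len(data) / len(out):.2f}x")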



