IMO they look similar at a glance, but actually serve very different use cases.
SeaweedFS is more about amazing small-object read performance, because there is effectively no metadata to query in order to read an object: you just distribute the volume id and file id (+cookie) to clients.
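Roughly what that read path looks like against the SeaweedFS HTTP API (the addresses, the example fid, and the JSON field names are illustrative, so double-check against the docs; the volume lookup is normally cached client-side):

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // lookupResult mirrors the master's /dir/lookup response
    // (field names as in the SeaweedFS docs; verify against your version).
    type lookupResult struct {
        Locations []struct {
            URL string `json:"url"`
        } `json:"locations"`
    }

    // readObject fetches one object given its fid, e.g. "3,01637037d6":
    // volume id, then file key + cookie. The only metadata hop is
    // volume id -> volume server, and clients normally cache that.
    func readObject(master, fid string) ([]byte, error) {
        volumeID := strings.SplitN(fid, ",", 2)[0]

        // Ask the master where the volume lives.
        resp, err := http.Get(fmt.Sprintf("http://%s/dir/lookup?volumeId=%s", master, volumeID))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        var lr lookupResult
        if err := json.NewDecoder(resp.Body).Decode(&lr); err != nil {
            return nil, err
        }
        if len(lr.Locations) == 0 {
            return nil, fmt.Errorf("no locations for volume %s", volumeID)
        }

        // Read the object directly from the volume server.
        obj, err := http.Get(fmt.Sprintf("http://%s/%s", lr.Locations[0].URL, fid))
        if err != nil {
            return nil, err
        }
        defer obj.Body.Close()
        return io.ReadAll(obj.Body)
    }

    func main() {
        data, err := readObject("localhost:9333", "3,01637037d6")
        if err != nil {
            panic(err)
        }
        fmt.Printf("read %d bytes\n", len(data))
    }

The point being that the lookup is per volume, not per file, so a hot small-object read is basically a single HTTP GET.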
3FS is less extreme in this regard: it supports an actual POSIX interface and isn't particularly fast at open()-ing files. On the other hand, it shards files into small (e.g. 512KiB) chunks, demands RDMA NICs, and makes random reads from large files scary fast [0]. If your dataset is immutable you can emulate what SeaweedFS does, but if it isn't, SeaweedFS is the better fit.
[0] By scary fast I mean being able to completely saturate 12 PCIe Gen 4 NVMe SSDs at 4K random reads on a single storage server, and you can scale that horizontally.
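Back-of-envelope, assuming roughly 1M random 4K read IOPS per Gen 4 datacenter drive (the exact figure varies by model):

    12 drives × ~1M IOPS × 4 KiB ≈ 46 GiB/s from a single storage node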
My guess is that performance is pretty comparable, but it looks like SeaweedFS has a lot more management features (such as tiered storage), which you may or may not be using.
It’d be neat to use subtrace for debugging in an ephemeral pod that just runs alongside the regular pod.
For monitoring network traffic across the whole cluster, the CNI and/or whatever eBPF-based runtime security tooling you’re using (Falco, Tetragon, Tracee) is usually enough, but I can definitely see the usefulness of subtrace for more targeted debugging. If it’s run as a DaemonSet, make sure to add some pod filtering such as namespace and label selectors (but I’m sure you’ve already thought about that).
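For illustration, that kind of filtering from a node-local agent could look something like this with client-go (the namespace, label key, and NODE_NAME env var are placeholders; NODE_NAME would come from the downward API):

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Running inside the DaemonSet pod, so use the in-cluster service account.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Only consider pods on this node, in one namespace, with an opt-in label.
        pods, err := client.CoreV1().Pods("payments").List(context.Background(), metav1.ListOptions{
            LabelSelector: "tracing/enabled=true",
            FieldSelector: "spec.nodeName=" + os.Getenv("NODE_NAME"),
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("would trace %s/%s\n", p.Namespace, p.Name)
        }
    }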
> use subtrace in an ephemeral pod for debugging purposes
That's a great suggestion. It'd be like kubectl exec-ing into a shell inside the pod, but for network activity. I think I'm going to prototype this tonight :)
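Kubernetes ephemeral debug containers already share the target pod's network namespace, so something like this might be a decent starting point (the pod name and image are just placeholders):

    kubectl debug -it mypod --image=nicolaka/netshoot -- sh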
> pod filtering such as namespace and label selectors
Yep, Subtrace already tags each request with a bunch of metadata about where it originated so that you can filter on those in the dashboard :) Things like the hostname, pod, cluster, and AWS/GCP location are automatically populated, but you can also set custom tags in the config [1].
I asked for a status update on the forums [1], but it doesn’t look too positive. From a September 2024 Steam post:
> After investigating the programming requirements, we have decided it’s best to cancel the Mac build for the time being. Porting the game over to Mac would take significant time and resources away from improvements to Fortress and Adventure Mode that simply don’t make sense for us to dedicate right now, given the low number of Mac-only users. There is a significant amount of work that would be required for maintaining a Mac build that would delay all patches in the future, and as we know, you all want patches faster. We aren’t saying it will -never- happen but do not count on it any time soon. We are very sorry to all the Mac users who have been waiting patiently for an update on this.
We’re running standard Prometheus on Kubernetes (14 on-prem Talos clusters; 191 nodes, 1.1k CPU cores, 4.75TiB of memory, and 4k pods in total). We use Thanos to store metrics in self-hosted S3 (SeaweedFS) with 30 days of retention, aggressively downsampling after 3 days.
It works pretty well tbh. I’m excited about upgrading to version 3, as it does take a lot of resources to keep going, especially on clusters with a lot of pods being spawned all the time.
We’re extremely pleased with Talos. Much more secure than Azure (our cloud of choice, unfortunately), which runs a full-blown Ubuntu underneath. We haven’t run into any issues with Talos, and upgrading both Kubernetes and Talos versions is super easy with the talosctl tool.
We currently have a Thanos instance in each cluster. We could move it to a separate cluster to reduce some overhead, but the current approach works. We’re ingesting about 60GiB of metrics per day into the S3 bucket, so we might have to optimise that.
I can’t recall the reason for using Thanos over Mimir, to be honest. I think Thanos seemed like a good choice given that it’s part of the kube-prometheus-stack community Helm charts.
[1] https://github.com/seaweedfs/seaweedfs