In the US you can work on your own house, including electrical and plumbing (including natural gas and propane lines, I think). For minor repairs like replacing a switch or receptacle, I don’t think a permit is required. For more substantial changes, a permit is generally required, as is an inspection. Supplies for doing all kinds of residential construction work are readily available at retail establishments that normal people visit regularly.
> The final aggregate read throughput reached approximately 6.6 TiB/s with background traffic from training jobs.
The Ceph team has been working on Crimson for years to get past performance bottlenecks inherent to the HDD-based design. I’m having trouble finding any Ceph benchmark results that come anywhere close to 100 GB/s.
The comparison is a little pears-to-apples: similar nutrition, but different enough that you shouldn’t draw conclusions. The hardware in the Ceph test is only capable of at most 1.7 TiB/s of traffic (optimally, without any overhead whatsoever).
I also assume that the batch size (block size) is different enough that this alone would make a big difference.
That difference is still pronounced, yes. But the workloads are so different: AI training is hardly a random-read workload. It’s still not a comparison that should lead you to any conclusions.
I’ve found that striping across two drives like the 980 Pro described here, or WD Black SN850s, easily gets direct IO read speeds over 12 GB/s on Threadripper Pro systems. This assumes a stripe size somewhere around 1–2 MiB, which means most reads will not need to be split, and high-queue-depth sequential and random reads keep both drives busy. With careful alignment of IOs, performance approaches 2x that of one drive.
IO takes CPU cycles, but I’ve not seen evidence that striping adds to that. Memory overhead is minimal, since picking the stripe to read from is simple math over a tiny data structure.
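To illustrate what that simple math looks like (a hypothetical Python sketch, not any particular RAID implementation; all names are made up):

    STRIPE = 2 * 1024 * 1024             # 2 MiB stripe unit, per the sizes above
    NDRIVES = 2

    def locate(offset, length):
        """Map a logical byte offset to (drive, drive-local offset, splits?)."""
        unit = offset // STRIPE          # which stripe unit the offset lands in
        drive = unit % NDRIVES           # stripe units round-robin across drives
        local = (unit // NDRIVES) * STRIPE + offset % STRIPE
        splits = offset % STRIPE + length > STRIPE  # IO crosses a unit boundary?
        return drive, local, splits

With a 2 MiB stripe unit, 1 MiB reads at 1 MiB-aligned offsets never split, so each read lands on exactly one drive and high queue depths keep both busy.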
Well, this really depends on what you use (mdraid/zfs) and, as you said, alignment. But only for reads.
If you just use dumb mdraid (RAID 10, without much optimization configured) + XFS and read/write on top of it, you will end up with quite high memory + CPU usage, but I/O will still be insanely fast.
I've written this more to say that it works, but it's not a "put another drive in and done" solution. If you just dump a second drive into it and use mdraid/zfs, you will have overhead. Of course, if somebody tunes it and builds the application around it, the overhead can be trimmed down significantly (see the sketch below).
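For example, a minimal sketch of that kind of application-side tuning (Linux-only; /dev/md0 is a placeholder path, and reading the device needs the right permissions): page-aligned O_DIRECT reads bypass the page cache, which is where much of the extra memory and CPU of plain buffered IO on top of md goes.

    import mmap
    import os

    # Anonymous mmap gives a page-aligned buffer, which O_DIRECT requires.
    fd = os.open("/dev/md0", os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, 1024 * 1024)   # 1 MiB, a multiple of the block size
    n = os.readv(fd, [buf])            # aligned buffer, offset, and length
    os.close(fd)
    print(f"read {n} bytes without touching the page cache")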
> Bridgy Fed connects web sites, the fediverse, and Bluesky. You can use it to make your profile on one visible in another, follow people, see their posts, and reply and like and repost them. Interactions work in both directions as much as possible.
Sure. That Party City gift card bought last week may not be honored today due to bankruptcy and certainly won’t be honored after the stores close in a couple months.
No, that argument applies to the owner of the gift card accounting for the fact that the gift card may not be used (and it isn’t a great method if they intend to spend it and expect to get the full value).
What I’m talking about is the provider of the gift card writing off some portion of the liability for gift cards they have sold, because some will never be spent: people forget about them, lose them, and so forth.
> “I moved across the country to work here, for a seasonal job,” she says. “We have people who have worked here for 10 years as seasonals, and made a career out of these positions. They trusted that the jobs wouldn’t go away.”
I suspect the other part of these careers involves seasonal work that covers a different part of the year, such as working at ski resorts.
And that’s not all that simple, as has been experienced by the Solaris (never-released(?) Linux branded zones), illumos (lx brand), and Windows (WSL1) developers who have tried to make existing kernels act like Linux.
It’s probably easier if the kernel’s key goal is to be compatible with the Linux ABI rather than being compatible with its earlier self while bolting on Linux compatibility.
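As a concrete illustration of what "the Linux ABI" pins down (a minimal Python sketch; the syscall number assumes x86-64, and other architectures differ):

    import ctypes

    # write(2) is syscall number 1 on x86-64 Linux. A Linux-ABI-compatible
    # kernel must honor this exact numbering and calling convention for
    # unmodified Linux binaries, not just offer a similar C library API.
    libc = ctypes.CDLL(None, use_errno=True)
    SYS_write = 1
    msg = b"hello via the raw Linux syscall ABI\n"
    libc.syscall(SYS_write, 1, msg, len(msg))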
I'm sure it's not trivial, but I was under the impression that illumos, FreeBSD, and NetBSD all have perfectly good Linux compatibility layers so it's clearly doable. (WSL1 excepted because NT apparently really doesn't want to be a unix-like)
From my experience working on it from time to time at Joyent, the parts that are implemented work pretty well on the lx brand in illumos. At the time, things like cgroups and namespaces were not implemented and there was no clear path to implementing them. It’s kinda hard to participate in the Docker or k8s ecosystem with such limitations.
I was hired at Joyent largely to work on bhyve so that Triton and Joyent’s public cloud had a way to run Linux VMs when full Linux compatibility was more important than the efficiency of zones/containers.