Actually, those Sony headphones are one of the reasons I am interested in Linux instead of OSX. LDAC support is not present on OSX, but it is on Linux.
It makes a huge difference in audio quality over bluetooth in my experience.
Excellent, not sure what I missed before, but I have aptX and AAC now! A good step in the right direction. Still would love LDAC and/or aptX HD in the long run, of course.
No, I meant AAC. The XM3s do not seem to negotiate AAC over BT on OSX. They fall back to SBC and obviously sound bad. AAC would be a step up, but aptX HD or LDAC would be ideal.
They work fine in my case on both iPhone and Android.
So, as I was saying, I'm tempted to move back to Linux for my work laptop for LDAC-over-BT support, as well as other reasons of course.
This is not so different actually, and has been done in the past by other auto manufacturers (some may even still be doing this today). The Ford River Rouge plant is the best example I can think of, with literal iron ore going in one side and finished cars rolling out the other.
Here's Tesla's supplier list.[1] Tesla makes the battery, motor, aluminum stampings, and plastic parts. They also now make their own seats and some of the circuit boards for the electronics. Everything else (steering, brakes, wheels, etc.) is outsourced. Makes sense; that stuff is standard.
I'm not sure about FreeNAS, but I do want to point out this is possible in the btrfs filesystem (sharded raid across drives). I'm not aware of any FreeNAS equivalents that utilize it, though, but rolling your own is certainly an option.
That seems ill advised; one of the benefits of btrfs is that it obviates the need for LVM and mdadm.
I guess in your case you have a more stable raid5/6 opportunity, but you're losing many of the raid benefits present in btrfs natively. I'd also imagine it could be slower or introduce IO issues others haven't tested. Though I really have no idea; I've never seen anyone do that before.
It's been great for our use case - we've been running it like this for about 5 years. We use it as an rsync target for backups (not the only one!) and take daily snapshots. md is more flexible than ZFS's raid, but less flexible than native btrfs raid.
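For anyone curious what that kind of flow looks like, here is a minimal sketch (rsync onto a btrfs subvolume sitting on md, then a read-only dated snapshot), written in Python purely for illustration; the rsync source, mount points, and subvolume names are hypothetical placeholders, not the actual setup described above.

    #!/usr/bin/env python3
    """Sketch of an rsync-then-snapshot backup flow on btrfs.
    All paths below are made-up examples."""
    import subprocess
    from datetime import date

    SOURCE = "user@fileserver:/data/"      # assumed rsync source
    BACKUP_SUBVOL = "/mnt/backup/current"  # assumed btrfs subvolume (on top of md raid here)
    SNAP_DIR = "/mnt/backup/snapshots"     # where dated snapshots are kept

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Pull the latest data onto the btrfs subvolume.
    run(["rsync", "-aH", "--delete", SOURCE, BACKUP_SUBVOL])

    # 2. Freeze today's state as a read-only, copy-on-write snapshot.
    snap_path = f"{SNAP_DIR}/{date.today().isoformat()}"
    run(["btrfs", "subvolume", "snapshot", "-r", BACKUP_SUBVOL, snap_path])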
btrfs is most certainly not dead. I actually think it's really gaining traction at present. It only became stable around Ubuntu 14.04 or so, and it takes people a while (understandably) to warm up to a new filesystem.
It's great to see ZFS on Linux get a more stable footing. It's an excellent filesystem. As others have said I think the use case differs slightly from btrfs (though they are very similar in capabilities).
ZFS, to my eyes, seems more resilient. It has more levels of data checksums, the RAIDZ model allows for more redundancy, and it just feels like a stronger enterprise offering (meaning stable and built for large systems and disk quantities).
btrfs brings many of the ZFS features to Linux in a GPL wrapping. What it lacks in resiliency, it makes up for with flexibility. Raid in btrfs, for instance, occurs within data chunks across disks, not at the disk level, which allows mixed disk capacities and on-the-fly raid changes. I also appreciate the way it divides namespace across subvolumes while maintaining block awareness within the pool (cp --reflink across subvolumes, snapshots across subvolumes). It also doesn't have the RAM requirements of ZFS (which aren't much of a data center concern, but are definitely a client-level concern for workstations).
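As a rough illustration of those two points (reflink copies across subvolumes, and on-the-fly raid profile changes), here is a small Python sketch; the mount point, subvolume names, and file names are invented for the example.

    """Sketch of two btrfs behaviours mentioned above. Paths are hypothetical."""
    import subprocess

    POOL = "/mnt/pool"  # assumed btrfs mount containing subvolumes 'projects' and 'scratch'

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Reflink copy: instant and extent-sharing, and it works across subvolumes
    # because both subvolumes live in the same block pool.
    run(["cp", "--reflink=always",
         f"{POOL}/projects/dataset.img",
         f"{POOL}/scratch/dataset.img"])

    # On-the-fly profile change: rebalance data and metadata chunks into raid1.
    # Because raid happens per chunk, this works even across differently sized disks.
    run(["btrfs", "balance", "start", "-dconvert=raid1", "-mconvert=raid1", POOL])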
Either way it's a win; both are great filesystems for Linux. With bcache supporting btrfs properly now, I personally don't have much of a reason for ZFS. Two years ago I would have jumped to it easily. Your workloads and needs may differ; it's great to have choices!
Interesting findings. I'm also curious to see how fixed-width stripes change the performance profile in btrfs (presently not supported). To be clear though, the raid5/6 write hole only applies to power failures with data in flight. It is still a concern, of course, but I think it is acceptable to some depending on the environment (redundant PSUs, well-engineered PDUs and UPS systems). Personally, I'm increasingly of the opinion that parity rebuilds aren't worth it anyway. I'd rather raid10, raid1, or raid0, depending on use. If I have to take a system out of production during a parity rebuild (because IO activity is too intense for performant use), I might as well skip the parity rebuild, simply reload the system on failure, and rely on other cluster nodes.
Raid5 is not dead yet (https://www.cafaro.net/2014/05/26/why-raid-5-is-not-dead-yet...). The problem with failures during rebuilds is overblown, IMO. The manufacturer-quoted URE failure rate (probability of a failure to read) is overstated - instead of 1 error in 10^14 bits, drives are mostly more like 1 in 10^15 or better.
Full disclosure: we're actually doing erasure coding in HDFS over Raid5 on servers (double insurance - if the raid array goes down, we can recover from other servers in HDFS). But our expectation for 6x4TB arrays is not a 70%+ chance of a URE during a rebuild, but rather a couple of percent. With ZFS or btrfs, it won't actually matter for us, as we'll only lose a block on a URE, which we can recover from the rest of the cluster.
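For anyone who wants to sanity-check those figures, here is a quick back-of-the-envelope calculation in Python. It treats UREs as independent events at the quoted per-bit rate; the 6x4TB RAID5 geometry comes from the comment above, and the assumption that the spec figure is an actual rate (rather than a cap, as discussed further down) is exactly the point under debate.

    """Rough estimate of the chance of hitting a URE during a RAID5 rebuild."""
    import math

    disks = 6
    disk_tb = 4
    # A rebuild reads the 5 surviving disks in full.
    bits_read = (disks - 1) * disk_tb * 1e12 * 8

    for rate in (1e-14, 1e-15, 1e-16):   # candidate URE rates, errors per bit read
        expected_errors = bits_read * rate
        p_at_least_one = 1 - math.exp(-expected_errors)
        print(f"URE rate {rate:.0e}/bit: expected {expected_errors:.2f} errors, "
              f"P(>=1 URE during rebuild) ~ {p_at_least_one:.1%}")

At the commonly quoted 1 in 10^14, reading ~20TB gives roughly an 80% chance of at least one URE, which is where the scary rebuild numbers come from; at 1 in 10^15 or better, the picture looks much less dire.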
> The problem with failures during rebuilds is overblown
I thought I was the only one who believed that. I've said this on reddit before and ended up at something like -20 votes, with people blatantly arguing I'm falsifying an "impossibility".
I've got roughly 30 arrays in production, with between 4 and 12 disks in each. All are RAID5 + hot spare. If you believe the maths people keep quoting, the odds of seeing a total failure in a given year are close to 100%. I started using this configuration, across varying hardware, over 15 years ago, and the number has only grown since.
I'm not pretending one example proves the rule, or that it's totally safe and I would run a highly critical environment this way (before anyone comments: these environments do not meet that definition), but people have tried to show maths claiming a six-nines likelihood of failure, and I just don't for a second believe I'm that lucky.
Well, at least on Linux, by default almost everyone (using consumer drives) has their array in a very common misconfiguration, and this leads to raid5 collapsing much sooner than it should.
The misconfiguration is that the drive's SCT ERC timeout is greater than the kernel's SCSI command timer. So what happens on a URE is that the drive (if it's a consumer drive) goes into "deep recovery" and keeps trying to recover that bad sector well beyond the kernel's default command timer, which is 30 seconds. At 30 seconds the kernel assumes something's wrong and does a link reset. On SATA drives this obliterates the command queue and any other state in the drive. The drive doesn't report a read error, doesn't report what sector had the problem, and so RAID can't do its job of fixing the problem by reconstructing the missing data from parity and writing it back to that bad sector.
So it's inevitable that these bad sectors pop up here and there, and then if there's a single drive failure, in effect you get one or more full stripes with two or more missing strips, and those whole stripes are lost just as if it were a 2-disk failure. It is possible to recover from this, but it's really tedious, and as far as I know there are no user-space tools to make such a recovery easy.
I wouldn't be surprised if lots of NASes running Linux were configured this way, and the user didn't use the recommended drives because, FU vendor, those drives are expensive, etc.
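If you want to check whether a box is in that misconfigured state, something like the following Python sketch compares each drive's SCT ERC read timeout (via smartctl) against the kernel's SCSI command timer in sysfs. The device names are examples and the smartctl output parsing is deliberately simplistic, so treat it as a starting point rather than a finished tool.

    """Sketch: flag drives whose SCT ERC timeout exceeds the kernel command timer."""
    import re
    import subprocess
    from pathlib import Path

    for dev in ("sda", "sdb"):  # example devices; adjust for your system
        # Kernel SCSI command timer in seconds; the default is typically 30.
        kernel_timeout = int(Path(f"/sys/block/{dev}/device/timeout").read_text())

        # SCT ERC read timeout, reported by smartctl in tenths of a second.
        out = subprocess.run(["smartctl", "-l", "scterc", f"/dev/{dev}"],
                             capture_output=True, text=True).stdout
        match = re.search(r"Read:\s+(\d+)\s+\(([\d.]+) seconds\)", out)

        if match is None:
            print(f"{dev}: SCT ERC disabled or unsupported -> drive may hang past "
                  f"the {kernel_timeout}s kernel timer on a bad sector")
        else:
            erc_seconds = float(match.group(2))
            status = "ok" if erc_seconds < kernel_timeout else "MISCONFIGURED"
            print(f"{dev}: ERC {erc_seconds}s vs kernel {kernel_timeout}s -> {status}")

    # Typical fixes: cap the drive's recovery time with `smartctl -l scterc,70,70 /dev/sdX`
    # (7 seconds), or raise the kernel timer by writing a larger value to the sysfs file above.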
Don't forget the part where many consumer drives won't let you play with the SCT ERC settings, and some of them just completely crap out on URE and won't come back.
(My personal favorite was when I discovered a certain model of "consumer" drives we had thousands of in production claimed to not support SCT ERC configuration, but if you patched smartctl to ignore the response to "do you support this", the drives would happily configure and honor it.)
Follow the money: who is selling the "raid5 is dead" story? The main worry is correlated failures if you have the same types of drives in an array and they reach their end of life.
Note that the manufacturers aren't actually saying the URE rate is X. They are saying it's less than X; it's a cap. Therefore it isn't a rate. The actual rate for two drives could be very different, maybe even more than an order of magnitude apart, but as long as it's below the spec's cap for such errors, it's considered normal operation.
So yeah, I agree: the whole idea in some circles that you will get a URE every ~12TB of data read is obviously b.s. We don't know what the real-world rate is because of that little less-than sign that appears in all of these specs. We only know there won't be more errors than that, and not for a specific drive, but rather across a (virtual) sample for that make/model of drive.
For scalable storage, get rid of conventional RAID for data. I'd like to see n-way (definable) copies of metadata and single copies of data. On top sits a cluster filesystem like GlusterFS. When a device dies, the filesystem merely rebuilds metadata and then informs GlusterFS of the data missing due to the failed drive(s). The filesystem then deletes the references to all missing/damaged files from that drive.
No degraded state ever happens. This way Gluster knows not to even make requests to that brick. If the brick were raid56 and the cluster fs weren't aware, requests would go through degraded reads/writes, which suck performance-wise.
Plus Gluster (or even Ceph, for that matter) might use some logic and say: well, that data doesn't even need to be replicated on that brick; it's better for the network/use loading if the new copy goes on this other brick over here.
> It makes a huge difference in audio quality over bluetooth in my experience.
*edit: Just realized I posted the wrong link. https://eischmann.wordpress.com/2019/02/11/better-bluetooth-...