
I’m one of the chorus. I actually preordered a TOTL Asus Zephyrus Duo about a month ago, and it came in last week. It’s been a dream so far — 16 real cores of AMD 7945HX, laptop 4090, hybrid graphics, two real screens! The main display is a quite color-accurate 16:10 mini-LED with 240Hz FreeSync, the lower display is a high-PPI IPS touch panel, there are dual RAID-able M.2 slots (I put in dual WD SN850Xs in raid0 and that’s showing ~14.5GB/s reads and ~13GB/s writes in CrystalDiskMark), decent sound and webcam, and the keyboard is at the bottom edge…the only downsides are that the touchpad is a bit strange in size (portrait orientation) and the click on it doesn’t feel quite MacBook-nice, there’s no USB4, and the power brick is very brick-like, with a thick cable that doesn’t flex very well. Cooling is excellent due to the intake fans under the second screen combined with the lower heat output of the AMD chip, allowing it to run maxed out without throttling…they did this right. I can’t get Pop!_OS to install yet; I’m guessing it might need the AMD RAID driver like Windows did. Requires further investigation.

Anyway, excited for a Framework version too! While I prefer AMD integrated graphics to Intel, NVidia dGPU would be even better, and even better still if it was upgradable. Are laptop GPUs still available on MXM cards?



That is a pretty niche laptop; I think the Framework folks are going more for the everyday-driver kind of experience. I've got a Framework 13 and really like it: it is, for me, a pretty solid ThinkPad replacement. But the real icing on the cake will be when I update the motherboard for an upgrade (since it is relatively new, I don't expect that to happen until maybe next year).

On the Asus, are both screens touch screens, or is only the lower one? I had looked at 20:2-type touch screens to do a sort of "media bar" setup on my desktop but didn't find anything I could use at the time. I'm wondering how well such a setup might work.


Why even try to use the hardware raid? I can't imagine it actually performs any better than mdraid, and with mdraid the drives are as portable as plain drives. You could destroy the special laptop, stick the drives into USB enclosures, and access the array again on any other machine.
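
To sketch it (device names here are hypothetical and will differ depending on how the enclosures enumerate), bringing the array back up on any other Linux box is roughly:

    # see which attached devices carry md superblocks
    sudo mdadm --examine --scan
    # let mdadm find and assemble the array from whatever paths
    # the USB enclosures happened to get
    sudo mdadm --assemble --scan
    # or name the members explicitly
    sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1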


Tell me you work in ops without saying you work in ops. :-)

Btw, this is excellent advice. Funny story: I have a FreeNAS device whose motherboard died, and I thought "Oh my, I need to bring my ZFS volumes up on another machine," but I didn't have another machine with 6 SATA bays! I ended up with the drives all sitting out on the workbench connected to a mainboard with 8 SATA ports so that I could create an archive of the data, and then got the mainboard fixed so I could reassemble and reuse the FreeNAS. Still, it alerted me to the fact that I really needed a 6-8 drive cabinet if I wanted to do this again.


Heh, you were "lucky": I had a 4-drive NAS that died on me, and no motherboard lying around at all, let alone one with 4+ SATA ports.

I bought four cheap SATA-USB3 adapters, plugged them into two USB3 hubs, which I then plugged into the two USB3 ports of a Raspberry Pi 4, and arranged it all quite precariously in a small cardboard box that I cut holes into for airflow. Performance was terrible, of course, but it worked well enough until I could build a proper new NAS box.


The portability doesn't end with SATA or USB ports.

With generic software RAID (mdadm), even if you only had a single USB port and a single internal drive, you could image all the drives one at a time and then access the array of images on the single big drive (not an uncommon situation, since usually some time has passed by the time something fails).
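
Roughly like this (paths and device names are made up for illustration):

    # image each member one at a time onto the single big drive
    sudo dd if=/dev/sdb of=/big/member1.img bs=64M conv=noerror,sync status=progress
    sudo dd if=/dev/sdc of=/big/member2.img bs=64M conv=noerror,sync status=progress
    # expose the image files as block devices
    LOOP1=$(sudo losetup --find --show /big/member1.img)
    LOOP2=$(sudo losetup --find --show /big/member2.img)
    # assemble the array from the loop devices, read-only to be safe
    sudo mdadm --assemble --readonly /dev/md0 "$LOOP1" "$LOOP2"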

It also goes the other way if you need it to. Say a single array member was 4T but you only have a bunch of 1T drives: no problem, you can assemble four 1T drives into a 4T container to hold the 4T image, and then use that image as an array member itself.
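
Something like this, again with made-up device names, using raid0 for the temporary container (LVM would work just as well):

    # join four 1T drives into one ~4T scratch device
    sudo mdadm --create /dev/md50 --level=0 --raid-devices=4 /dev/sd[b-e]
    sudo mkfs.ext4 /dev/md50
    sudo mkdir -p /big && sudo mount /dev/md50 /big
    # copy the 4T member image onto it, then hand it back to the real array
    sudo dd if=/dev/sdf of=/big/member3.img bs=64M status=progress
    LOOP=$(sudo losetup --find --show /big/member3.img)
    sudo mdadm --assemble /dev/md0 /dev/sdg1 /dev/sdh1 "$LOOP"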

Even if you don't have any loose external drives, you could do it all via network shares, with pieces residing on all of your other family members' laptops and desktops, or on every desk in an office, while they all continue running Windows and doing their normal jobs.
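
A plain share from any of those machines can hold a piece (sketch only; the share name and paths are invented):

    # one member image can live on an ordinary Windows share, for example
    sudo mkdir -p /mnt/familypc
    sudo mount -t cifs //familypc/share /mnt/familypc -o guest
    sudo dd if=/dev/sdg of=/mnt/familypc/member4.img bs=64M status=progress
    LOOP=$(sudo losetup --find --show /mnt/familypc/member4.img)
    # $LOOP can now be passed to mdadm --assemble like any other member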

Some of these possibilities are slow or fragile or both, so of course you don't set out to use 12 USB 2.0 ports. But the point is that essentially anything is possible, and you don't have to worry about predicting or planning for every scenario: you just don't have to care how you'll recover the array in the future, because it doesn't matter what form storage takes at that time, or what form you happen to have available. It would almost never make sense to do some of these things, but the point is that mdadm just doesn't care.

For a machine with only 2 or 4 internal drives where you want to use raid0 for max throughput, and don't want to rely on any special firmware support for booting raid0, just partition the drives so that /boot is a small raid1 across all the same drives, so that any of the drives could boot. Bonus: it automatically makes all the members of your main raid0 slightly smaller than the drives' nominal size, which means you can always fit them onto some other replacement drive later, even if different manufacturers count bytes and formatting overhead differently.
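
As a rough sketch of that layout for a two-NVMe machine (device names hypothetical; bootloader install details such as the EFI or BIOS boot partition are omitted):

    # on each drive: a small partition for /boot plus a big data partition
    sudo sgdisk -n1:0:+1G -t1:fd00 -n2:0:0 -t2:fd00 /dev/nvme0n1
    sudo sgdisk -n1:0:+1G -t1:fd00 -n2:0:0 -t2:fd00 /dev/nvme1n1
    # /boot: raid1 across both drives; metadata 1.0 keeps the md superblock
    # at the end so firmware/bootloaders can read it like a plain filesystem
    sudo mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=2 \
        /dev/nvme0n1p1 /dev/nvme1n1p1
    # everything else: raid0 for throughput
    sudo mdadm --create /dev/md1 --level=0 --raid-devices=2 \
        /dev/nvme0n1p2 /dev/nvme1n1p2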

I come from the days of SCO Unix on SCSI hardware RAID with full-featured, expensive cards, and I do not miss it.


Okay, that is super creative. I love it!


https://github.com/Lillecarl/nixos/blob/master/shitbox/disko... this is my declarative partitioning scheme; I use mdraid, LUKS, LVM and btrfs. I also mirror my bootloader so if one drive dies I can still boot :)

Hardware raid is legacy :)


That's only until the machine in question is 5000km away and the soonest you can get to it is in three months.

Sure, for personal use there is almost no use for HW RAID, but when you need to make sure that the system will always boot, and it can't be serviced within hours or days, then SW RAID leaves you with almost no options.


Incorrect.

No problem to put /boot on a raid1 on a small partition across all drives, so that any drive can boot, and no problem to even include a whole self-contained, remotely accessible recovery OS. It's a little more work to set up, but if you are professing a need for that, then a little extra setup is de rigueur. I remotely administered a ton of Linux boxes in racks scattered across the US like that for years. Although I had out-of-band serial console access and could do a full bare-metal reinstall that way, I could also do it from any neighboring machine still running in the same rack if I had to, with a combination of network booting and/or booting from any one of the normal drives' raid1 copies of the /boot partition.

Further remote-able fallback options that I never had to use but could have: local hands just plucks a hot-swap drive from any of my other machines and pops it into the bad machine. All drives had the same bootable partition and all drives were redundant, so they could yank literally any one from the wall of server fronts. Or, better, local hands just plugs in a thumb drive and I take care of the rest. The thumb drive is already sitting there for that purpose, or they could make a new one from a download. But with 8 to 24 hot-swap drives per machine, meaning 8-24 copies of /boot, I never once needed local hands to so much as plug in a thumb drive.

There is just no problem at all with SW RAID. It only adds options; it doesn't remove any.


> Incorrect

Did you even read my comment? It's quite clear what your environment was in the data centers, with spares and remote hands.

Mine wasn't, and when I say three months I don't kid or jest.

> No problem to put /boot on a raid1 on a small partition across all drives

This is exactly the problem. If the drive isn't totally dead (dead as in not even responding to IDENT), then there is a chance that the BIOS/UEFI will try to boot from it and even partially succeed (i.e. load the MBR/boot code), and at that point there is no way to alter the boot process. A HW RAID card provides a single boot point and handles failing drives by itself, never exposing those shenanigans to the upper level.

Sure, you are happy with your setup, you never had a bad experience with it, you always had OOB management and remote hands - but that doesn't mean it's a silver bullet that works 100% of the time for everyone.

Yes, I have seen, with my own eyes, systems with SW/fake RAID fail to boot because the boot process picked a half-dead drive as the boot device. Thankfully I was geographically close to them, not 5000km away.

Yes, I serviced and prepared systems for divisions 5000km away, and they really can be serviced only a couple of months a year; the rest of the time you need an extremely urgent reason to rent a heavy 'copter to go there. No, there are no remote hands there. The peak of IT competency there is racking up bills with satellite Internet.


The house could also burn down. The point was that there is nothing hardware RAID makes uniquely possible, or even merely better, or even merely equal.


Never used disko; are there any gotchas? Will it format my drives if I run nix rebuild?


I bought a drive enclosure that has a hardware RAID built in, and I’ve been pretty paranoid about portability from the moment I configured it.

It’s probably time for me to figure out converting over to software raid.

Thanks for the nudge.


+1. Years ago I had a raid1 mirror that failed because the controller itself went bonkers and both drives were accumulating errors. Luckily I was using mdraid and could recover the files by using testdisk [0] on both of them separately as USB drives on a different Linux machine. It was a really, really long process; I pray to the storage divinities not to have to relive those couple of days.

[0] https://www.cgsecurity.org/wiki/TestDisk


Good call. I have manually repaired a few mdadm arrays in my time that would have surely been complete losses in the hardware RAID systems I've encountered.


How's the battery life on it?



