How to Dual-Boot Ubuntu 20.04 and Windows 10 with Encryption (mikekasberg.com)
144 points by Fiveplus on Oct 28, 2020 | hide | past | favorite | 154 comments



Or, hear me out: buy a cheap SSD for each operating system. For as little as $200 you can have a solid setup of any flavor. Not an option because you're running things on a laptop and it's not a good idea to tear the laptop apart to replace the SSD? OK, then how about:

Run each operating system in its own VM, and you can encrypt its folder. Not an option because you want to play games on Windows and those are notoriously difficult to run inside a VM? OK, then how about:

Run a hypervisor on bare metal, like VMware ESXi. Each operating system then runs on bare metal as if it were alone, and on Windows you can definitely play those notoriously difficult games.


> Not an option because you're running things on a laptop and it's not a good idea to tear the laptop apart to replace the SSD?

One can still put the less-used system on an external SSD and boot from that. Much simpler.

The lengths some people go to in order to keep working with Windows, and the hassle they put up with for it, amaze me.


> The lengths some people go to in order to keep working with Windows, and the hassle they put up with for it, amaze me.

Agreed. Gaming on Linux has progressed to the point that, unless you're a competitive gamer, you can play nearly any Windows game on Linux thanks to Valve's Proton, the advancements made by the Wine team, and growing Vulkan support from game developers.


I'm a Linux-only user and I want to challenge this myth of Proton "just working". To be honest, I'm disappointed that people are misleading others: the ProtonDB website will give a Platinum badge to games that don't just work. See Fallout: New Vegas (https://www.protondb.com/app/22380): it's rated Platinum, but check the comments for the issues reported and the workarounds that might work (I had to give up for now). For a Gold-badge example, see https://www.protondb.com/app/719040: you need custom Proton versions and customization.

Can we please be honest and say that with some (or a lot of) work we can get many games running on Linux, but you're expected to know your way around Wine tech to configure and fix stuff?

I have a similar complaint about "Arch/Wayland/CoolShit2000 works great for me": the full truth is hidden. It kinda works, if you use this DE, and this video card, and don't use that feature, and you read the wiki before you do anything.

Are there any other Linux users who agree with me and are tired of the misleading "it works perfectly now"?


> "Are there any other Linux users who agree with me and are tired of the misleading 'it works perfectly now'?"

So, I never said Proton was perfect or that it works with every game. I also mentioned Proton as only one of several technologies for playing previously Windows-only titles on Linux.

I get where you're coming from, and I have had my own frustrations with Proton in the past. With that said, even some native Linux ports have been problematic (Rust, 7 Days to Die, etc.) so it's not just Proton itself with the issues. Overall though, my experience has been overwhelmingly positive lately, and certainly far better than it was ten years ago.


I agree that Wine and DXVK have made a lot of progress, and I can say it's impressive technology. The issue is that the number of games that just work is small, not large as was implied. Some popular games need patches that you have to compile into Wine (or find a pre-built version that includes them), and other, less popular games don't work, or worse, stop working halfway through (which happened to me with Elex).

I would say that if you want to play unsupported games on Linux, you need to be prepared for some reading and trial and error. Do you disagree?


I will agree Proton isn't perfect, but I will also say that outside of pro-gamer needs, it is a platform in and of itself with a very good selection that I can certainly live with, and it certainly seems much more appealing than, e.g., gaming on macOS.

I didn't have to configure a single thing to get excellent and bug-free performance on games like Witcher III and GTA V, and a host of others too.


It is true that we can enjoy more games on Linux than Mac users who upgraded to the latest macOS can.

It is also true that some games are supported by Valve and they should just work.

But there are more games that don't "just work" than games that do. I would prefer the community to be more honest about it. If you want to convince someone to try Linux and you show them that their preferred game is "Platinum" or "Gold", but the reality is different, the Linux community will become even more of a joke.


As a Linux-only gamer, I wouldn't say "nearly every game". Proton has normal Wine issues. Check ProtonDB for a compatibility rating before buying any Windows game on Steam.

Most of the games worth playing (Factorio, Subnautica, KSP, etc.) either have native Linux clients or work fine via Proton/Wine, but the latest and greatest non-indie titles generally don't. Anti-cheat/DRM/spyware is a Linux killer because it wants to inspect the operating system during play. Whatever the latest Batman game is called... it probably doesn't work.


> Most of the games worth playing (Factorio, Subnautica, KSP, etc.) either have native Linux clients or work fine via Proton/Wine

This is my conclusion as well. Almost everything I want to play runs on Linux one way or the other, so this has become a non-issue for me.

I agree the latest DRM'ed AAA game probably doesn't work (yet), but I wouldn't want to play it anyway.


Does that also include the anti-cheat crap that is required for any online play these days?


Tarkov disagrees


In fact, it disagrees twice: I used to run Windows in a VM with GPU passthrough, and performance was excellent, but I still couldn't play Escape from Tarkov because its anti-cheat allegedly started kicking or even banning users running the game in a VM.

There are workarounds to hide the fact that you're running a VM from the guest, but it's not worth the effort and in fact the more workarounds you apply, the higher the chance of getting banned if you trip their detection code.

Other than dumb anti-cheat software, gaming in a Windows VM is feasible, albeit expensive, when Proton doesn't work.


>The lengths some people go to ...

No kidding. I have a hacked-together solution including 2x GPUs, an HDMI 2.0 KVM, a "USB switch" which is flaky as hell (because the KVM emulates HID devices and doesn't support USB 3.0), shared memory between the guest (Windows) and host (Linux) to avoid latency in the audio path, a second NVMe drive so the guest can have native access to storage, a second USB controller so I can pass USB 3.0 devices to the guest, and I bought an entire new motherboard and CPU (AMD TR 3960x + TRX40) because Intel's non-server silicon doesn't do PCIe ACS[1] and I got sick of building a custom kernel, etc.

All this so that I can essentially play two games on Windows that I can't/won't play on Linux. One is Stellaris, which runs on Linux but has massive issues w/ Wayland. The other is FFXIV (an MMO) which I'm sure you could coax into running w/ Wine, but I don't want to get banned from an online-only game because of some overzealous anti-cheat thinking Wine is a hack.

At this point I think I'm a slave to the sunk-cost fallacy. I've entertained buying a $300 KVM[2] just to simplify this setup a tiny bit. I'm severely constrained in terms of case/motherboard/CPU options because of the insane amount (& spacing) of PCIe I/O required; before getting a proper system that supports ACS I would constantly run into weird QEMU/KVM/kernel bugs, etc.

I want off Mr. Bone's Wild Ride.

[1]: http://vfio.blogspot.com/2014/08/vfiovga-faq.html [2]: https://store.level1techs.com/products/kvm-switch-single-mon...


If I have a win10 pc now, how do I install ubuntu onto a secondary drive? Just plug it in, boot to a flash drive with ubuntu, then install to the new drive? Is there some quirk about boot order or grub or something I need to be mindful of?


> Run a hypervisor on bare metal, like VMware ESXi. Each operating system then runs on bare metal as if it were alone, and on Windows you can definitely play those notoriously difficult games.

This is exactly how WSL2 works. Windows and Linux run as guests under Hyper-V.


This depends on which O/S one considers primary, and what their requirements are.

For example, this architecture prevents other hypervisors from running¹.

If one wants to run, say, a fancy filesystem that is native (to Linux) and/or stable, again, one can't.

If one considers Linux the primary O/S, they could give a shot at VGA passthrough - I assume that if the system supports ESXi, it should support VGA passthrough as well. I personally prefer it to a native Windows - besides not having to perform reboots (which is minor), I like to have a snapshottable system. The caveat is that if one wants a very stable guest, they should reserve the GPU for that purpose (I do).

[¹] https://docs.microsoft.com/en-us/windows/wsl/wsl2-faq


Hyper-V would not be able to run Red Dead Redemption 2. VMware ESXi with a Win10 guest OS can.


Wat? I play games all the time on my Windows 10 + WSL2 setup.... which is exactly as I described: Windows running as a guest under Hyper-V alongside Linux.


And you have RDR2 running in that configuration? I mean, RDR2 running in your Windows guest OS under Hyper-V? Because I tried that and it wouldn't run; I could only run it under VMware ESXi.


I believe the point he is making is that the games can run in the native Windows above Hyper-V and then WSL2 / other Linux installs can be run under Hyper-V from the native Windows OS.


As far as I understood it, Hyper-V is a native hypervisor like Xen or ESXi. So if you're using Hyper-V even your Windows is running as Guest alongside any other Hyper-V VMs and WSL2.


Yes, although it has a "root partition", which could only be Windows up until a month ago[0].

[0]: https://lore.kernel.org/lkml/20200914112802.80611-1-wei.liu@...


Yes,

And I can have a running WSL2 session alongside RDR2 running.

As far as I can tell, enabling WSL2 (and thus Hyper-V) did not slow down Windows in any meaningful way I could find.


I was also confused by this recently.

Enabling WSL2 enables Windows' "Virtual Machine Platform", which Windows itself will then run on top of as a guest.

And yes, RDR2 runs at a stable 100fps at UWQHD max settings on my machine under these conditions.


I tried to find the official doc that explains it clearly, but I couldn't; otherwise I'd share it.


One OS per SSD is what I do on my desktop. Though these days the only thing I've really been using Windows for is Matlab (anyone get Matlab working in Wine or otherwise?), so I probably could just run a VM now.

My progression has been:

-> 2000: dual-boot Gentoo/Windows (mostly because of gaming)

-> 2012 (? I think, maybe 2014): Gentoo host with a QEMU Windows guest using GPU passthrough (which was still sort of a dual boot, since I could actually boot into that Windows SSD normally)

-> whenever Proton came out: Gentoo 99% of the time, rarely booting into Windows; got rid of the GPU/SSD passthrough because it was cool but ultimately too much hassle

-> now: almost never boot into Windows (once every few months), except when working with one specific researcher who only uses Matlab, because Windows is where my working Matlab install is ...

Inertia has kept the dual boot around because I don't want to set it all up again in a VM really.


I use Matlab under Linux and it works fine for my usage. Is there anything specific that doesn't work that means you need Windows?


I guess it's been years since I tried. It seems that Matlab does work fine in Linux now? Based on a quick search, that seems to be the case. I'll give it a go. Thanks for your comment.


> (anyone get matlab working in Wine or otherwise?)

Perhaps you could try GNU Octave.


Octave is not a 1:1 replacement. If you are sharing and contributing to a Matlab project, you probably want to run it in Matlab.


> Run a hypervisor on bare metal

This is how Qubes OS works: https://qubes-os.org. Yes, video passthrough should be possible: https://qubes-os.discourse.group/t/list-of-programs-that-wor....


Can you get an OS running on a bare metal hypervisor to have video output? Does PCI passthrough work well enough to do that?


Yes


Could someone explain what VMware ESXi is? Is it just another Unix-like operating system? Is it compatible with things like Nvidia GPUs?


It’s an operating system itself, but its only job is to allow you to run virtual machines.

It has a web GUI allowing you to VNC* into and interact with the virtual machines using only your browser, or you can install apps on your PC for a better experience.

You can limit the resources that each VM has, only a portion of the RAM, only some of the CPU threads etc.

It has a free version, but IIRC the free version limits you to 8 CPU threads per VM.

You can also pass through hardware to an individual VM, e.g. a whole graphics card and a USB keyboard/mouse. This means that if you were looking at the monitor and using the keyboard/mouse, you wouldn’t be able to tell it’s a VM. If you had two of each peripheral, you could run two local machines from a single tower.

I think Linus did something similar with unraid (same idea) and ran 8 (?) gaming VMs from a single tower using 8 GPUs, 8 monitors etc.

Note that nvidia doesn’t like you using GPUs in VMs and actively fights against it in the drivers but there are workarounds.

*Not actually VNC; something proprietary.


It’s VMware’s hypervisor. It installs to bare metal and you run VMs on top of it.


How would you run ESXi on a bare metal laptop with the ability to easily switch between multiple GUI guests?


Definitely wouldn't be fun, since only one video card can be passed through. You would have to remote in and swap the passthrough, or just use one VM without a GPU.


Bingo. VMs are totally underrated. I can't remember the last time I had a dual-boot setup.


> Or, hear me out: buy a cheap SSD for each operating system.

Probably gonna need a tutorial for that, too. Was trying to set that up this weekend, but there didn’t seem to be an obvious option for dual booting with full disk encryption in the Ubuntu installer.


No special configuration is needed, as long as the EFI partition, the `/boot` partition, and the root filesystem (or LVM etc) are all on the same drive. Windows will overwrite the bootloader on its own disk but doesn't touch others.
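For what it's worth, a self-contained layout matching that description might look like the following sketch; the device name (/dev/sdb) and partition sizes are illustrative assumptions, not from the article:

```shell
# Illustrative GPT layout for a standalone Ubuntu drive (assumed /dev/sdb),
# kept fully separate from the Windows disk. DESTRUCTIVE: wipes /dev/sdb.
sgdisk --zap-all /dev/sdb
sgdisk -n 1:0:+512M -t 1:ef00 /dev/sdb   # this drive's own EFI system partition
sgdisk -n 2:0:+1G   -t 2:8300 /dev/sdb   # unencrypted /boot
sgdisk -n 3:0:0     -t 3:8309 /dev/sdb   # LUKS container for root (or LVM)
```

With everything, including the EFI partition, on one drive, the firmware boot menu can pick either disk, and a Windows update touching its own ESP leaves this drive alone.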


Run ESXi on my laptop, got it. That sounds reasonable for most users ;)


>Or, hear me out: buy a cheap SSD for each operating system. For as little as $200

You can get a 240GB SSD for less than $30 these days.


Can you recommend one that doesn't die fast?


Honestly, I don't really know anything about their reliability. I got the Kingston A400 because it was a name brand, and it hasn't died yet. So, with a sample size of 1, it's fine.


This is a great article. I have a different strategy that I think is "better" in a few ways but definitely still complicated.

(I'm assuming this starts on a machine that already has Windows on it, because that's how my computers generally come, although maybe not in the future since Lenovo is selling ThinkPads with Linux now.)

Step 1: Install Ubuntu (let it set up the dual-boot stuff for you). The main advantage of this approach is that I don't hit the issue where GRUB can't boot Windows, as described in the article. (Note: Ubuntu's installer won't do encryption in this setup for some reason. Ubuntu, please fix this and save me the following steps!)

Step 2: Reboot into the installer. Now things are going to get crazy.

Step 3: Shrink the Ubuntu partition as small as it will go (resize2fs -M ...). Then create a new partition of the same size at the end of the drive and copy the data over to the new partition.

Step 4: Delete the original Ubuntu partition and replace it with a /boot partition and a LUKS partition.

Step 5: Copy the boot stuff into /boot, and copy the rest of the data onto your new encrypted root partition (I usually do LVM here as well). chroot into the new root, do the mounting stuff TFA suggests, install the lvm/dm-crypt tooling, reconfigure your initramfs, and run update-grub.

Step 6: Delete the copy of the original Ubuntu partition made in Step 3 and resize the encrypted partition as needed.

OK, I agree that's a pretty ridiculous sequence and I wish Ubuntu would do it for me, but it's pretty cool that it can be done at all (it takes about an hour).
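Sketched as shell commands, the middle steps might look roughly like this. The device names (/dev/sda2 etc.) and the temporary-copy approach are assumptions, and every command is destructive, so treat it as an outline rather than a script:

```shell
# Step 3: shrink the existing ext4 filesystem to its minimum size
e2fsck -f /dev/sda2
resize2fs -M /dev/sda2
# ...create a temporary partition /dev/sda4 at the end of the drive
# (fdisk/gparted), then copy the shrunken filesystem onto it:
dd if=/dev/sda2 of=/dev/sda4 bs=4M status=progress

# Step 4: recreate /dev/sda2 as /boot and /dev/sda3 as a LUKS container
cryptsetup luksFormat /dev/sda3
cryptsetup open /dev/sda3 cryptroot

# Optional LVM inside the LUKS container
pvcreate /dev/mapper/cryptroot
vgcreate vg0 /dev/mapper/cryptroot
lvcreate -l 100%FREE -n root vg0
mkfs.ext4 /dev/vg0/root

# Step 5: copy the data back, then chroot and regenerate the boot config
mkdir -p /mnt /tmp/old
mount /dev/vg0/root /mnt
mount /dev/sda4 /tmp/old && cp -a /tmp/old/. /mnt/
# ...bind-mount /dev /proc /sys, chroot /mnt, install cryptsetup/lvm2,
# update /etc/crypttab and /etc/fstab, then:
#   update-initramfs -u && update-grub
```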


I'm curious: why not run one or the other in a VM? Looking at Mike's resume, the technologies he is working with (LAMP stack, Ruby, PHP) seem like they would run perfectly fine in a VM.

I've been doing embedded development (with a Linux based toolchain) on a Windows 10 host with Ubuntu in a VM, and it seems to be plenty nice on modern hardware (and a lot of RAM). And I don't have to make a choice to boot back and forth: Two systems for the price of 1.5. :)

Edit: My question aside, I fully appreciate the article's primary goal, which is to document the nonobvious hoops the author had to jump through to get dual boot to work. Thanks!


I dual-boot on my media PC, where Ubuntu is my primary system, and I boot into Windows to play a few video games which don't work in Proton.

That's one case where you don't want to pay the performance overhead for virtualization, but I'm sure there are plenty of other cases where dual-booting is preferred.


I agree it's not for every use case, but in Mike's specific case, I'm still curious what his bottleneck or "this just doesn't cut it in a VM" piece of software is.


Do you know what his use-case is? I don't see it stated in this article.


It's a reasonable question. I've run Linux as my primary OS for years, and I have no desire to run Windows as my primary OS. (Linux is what I'm used to.)

My primary goal with the Windows partition was gaming, so dual-booting made more sense to me than running Windows in a VM, even though, as you say, a Linux VM can be a great dev experience too.


I run Ubuntu on a Dell laptop. I still keep the OEM Windows copy installed for one reason only: Contacting Dell support for warranty-related issues. This usually requires running their Windows-only diagnostics tools.


It's easy enough when things are on separate disks. Disk partitioning for dual-boot always was and always will be crap.


That is, until you use the TPM to decrypt your Windows partition automatically. If you boot a Linux boot manager on another drive, Windows won't decrypt its drive automatically, so you have to boot the Windows boot manager. At that point it's easier to just select your boot drive at startup.

You can, however, use other tools besides BitLocker to encrypt/decrypt the Windows drive, and that will work.

Using the TPM2 to decrypt your Linux drive on boot is a bit of a pain too.
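On distros with a recent systemd, one way to take some of the pain out of it is `systemd-cryptenroll`; the device path and PCR choice below are illustrative:

```shell
# Enroll the TPM2 as an extra key slot on an existing LUKS2 volume.
# /dev/nvme0n1p3 is a placeholder; binding to PCR 7 ties the key to the
# current Secure Boot state, so firmware changes fall back to the passphrase.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3

# Then let the initramfs try the TPM at boot via /etc/crypttab:
#   cryptroot  UUID=<luks-uuid>  none  tpm2-device=auto
# (and regenerate the initramfs afterwards, e.g. update-initramfs -u)
```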


What's the point of using a TPM to decrypt your disk automatically? You use disk encryption to be protected against the case that the computer gets stolen. But that means the attacker has your TPM and can use it.

The only way to prevent that is requiring a BIOS password at boot (not very common) and secure boot (and like in the article many Linux users skip that because it can require extra knowledge and extra work.)

So you replaced entering a disk password by entering a BIOS password. What's the benefit from usability perspective?

(Yes, secure boot would add security I don't try to deny that.)


The answer which is missing from the sibling comments is that it allows Windows to restart your computer to install updates (which Windows loves to do!) in the middle of the night and it can boot back up and finish the installation without user interaction. It can even restart more than once. This benefit doesn't exist in some of the dual boot setups that have been described here though.

As others have said, attackers are meant to be prevented from getting in directly by the Windows password, and they can't just put the disk in another machine to read because it wouldn't have the right TPM. I don't know enough about TPMs to know what stops an attacker from installing an OS on another hard drive on the same motherboard and using that to access the TPM to decrypt the target drive, but presumably that's been thought of. It's certainly susceptible to turning the machine on and then physically lifting the RAM out into another machine to extract the key (cooled RAM keeps its contents reliably for long enough for this to be feasible), or dismantling the TPM, but both of these are high skill attacks.


> I don't know enough about TPMs to know what stops an attacker from installing an OS on another hard drive on the same motherboard and using that to access the TPM

The TPM cannot prevent that; you need Secure Boot. That will prevent the machine from booting into another operating system, but only if you delete Microsoft's public keys from your machine. Otherwise the thief can boot Windows and any Linux distro that has a Microsoft-signed shim. Deleting Microsoft's keys on a machine that is supposed to dual-boot Windows would mean that you need to sign Windows yourself. Probably doable, but yet more hassle.

So disk encryption with the TPM (i.e. no manual disk password) probably raises the bar enough for the average thief bringing the device to a dodgy backyard shop, but not for a somewhat dedicated attacker. No state-level resources are needed here at all. Given enough time I might succeed myself, and I have zero practical experience with such attacks; I'm just a normal software developer (having worked a bit with system boot, but not a whole lot).


> The TPM cannot prevent that

Wait, why? I mean, they would be able to access the TPM, sure, but not the keys that unlock the hard drive. This was my understanding when I played around with the TPM: if the boot chain has changed, no keys are given to the running OS. You can only reset them, but that just leaves you with an encrypted hard drive and no password to unlock it. I've only used it with LUKS though, not with Windows, but it'd be weird if the approach were different.


As I understand it, the idea is that even without a BIOS password or a disk encryption password, your OS can boot and only then request a session password. So effectively you only have to enter one password, rather than two or three. Even if the laptop is stolen, TPM included, the thief still has to guess the login password. This is how BitLocker et al. operate, and to be honest it makes a lot of sense for a regular user (practically everyone, unless your enemy is a government-like entity that can decap/sniff TPM chips...).

Secure Boot actually does not help much here, since its point is that the thief wouldn't be able to change the disk contents.


Interesting - LUKS and Apple's Filevault take the opposite approach; wherein you must provide a password from an authorised user in order to access the contents of a filesystem/logical volume. Meaning that without authorisation your data is still encrypted.

I'm not completely familiar with Windows service management and how it handles logins - but doesn't the TPM auto decrypt function of Bitlocker mean that if you have a compromised system which has a dodgy service that starts at boot time it can potentially exfiltrate data from the machine without a user logging in?

Of course, if this is the scenario you're experiencing you have much bigger problems already haha.


You still need a password, but you don't have to choose/enter a separate password for your disk encryption. With TPM, your computer decrypts and boots up to your OS, where a thief would have to enter your OS password. They can't bypass it by patching the OS files or cracking the hash stored on disk, because the disk is encrypted. They also can't bypass it with a patched kernel/bootloader, because that would change the PCRs, causing the TPM to no longer auto-unlock.


> What's the point of using a TPM to decrypt your disk automatically?

It's moving the slider towards the "ease of use" end of the "security - ease of use" spectrum. It essentially outsources security to the Windows OS logon, at which point a whole bunch has started up in the background and the attack surface has substantially increased.

But it saves having to type in a second password.


I've done it for so long and have never had an issue.

Every once in a while Windows overwrites the boot sector and you have to stick in a rescue stick. Not a huge problem.


With UEFI, that seems to no longer be a problem.


The only time I've faced an issue was when Linux was installed first; Windows would modify the existing boot partition. But for simplicity and to ensure isolation, I use separate drives as well.


I used to dual-boot Windows XP and Windows 98 on the same HDD and never faced any problems. However, Linux and Windows always conflict if the dual boot is on the same HDD.


Wow - 20 years on, and Windows, OS X and Linux still don't peacefully coexist.

Seriously, it shouldn't be hard: you should be able to install the OSes in any order and have the installer simply set up a boot menu asking which to boot. There should be some cross-platform filesystem or LVM-like system so storage space can be dynamically shared among all the OSes.

Uninstalling an OS should be as simple as installing it, too. Today, as far as I'm aware, no major OS has an uninstaller.


Formatting the partition(s) your OS is installed on is the uninstallation method. Uninstalling from the disk implies that you'll reuse the existing filesystem on the computer.

Each of your named OSes has different preferences as to which filesystem they will run on, each filesystem with different capabilities and expectations. Windows - NTFS, OS X - APFS and HFS+, Linux - depending on which flavour you choose. Thus an uninstallation program isn't really much use, because the installation of anything new should blow that away.

Or is it you wish that your computer had a bootstrap program when you powered it on which would source an installation image (for Windows, OS X, whatever Linux flavour you prefer)?

To that last one Apple machines do offer this in a limited fashion in the form of "Internet Recovery", plug in a blank disk into a Mac or delete the GPT on your SSD and it will offer to connect to wifi/ethernet to download the installation image for either the version of OS X it shipped with, or alternatively the version last installed (I cannot recall which). Though of course it won't help source any installation media of the other OS families.

This installation image is a cut down version of OS X which will provide just enough drivers to get the system functional, the disk drive formatting tools, a command line interface with a number of basic system tools to troubleshoot with, and a functioning web browser.


> Formatting the partition(s) your OS is installed on is the uninstallation method.

Except it isn't... Start with a dual-boot Windows and Linux system and delete the Linux partition... and suddenly Windows won't boot! Or, with some UEFI setups, it leaves Linux as a default boot option that half-boots and then fails because it can't mount the root filesystem.

It's just poor design.

I'm imagining something more like the UEFI menu having a "Right click, Delete" option next to each bootable OS. Clicking delete would run that OS's uninstaller code and refresh the menu.

You could likewise have a "Add new OS" button in the UEFI firmware where you can select from a list of available OS's to download from the net, or provide your own URL to an iso or some installation metadata file.


Your problematic example seems like a limitation of the implementation rather than a fundamental technical problem.

Forgive me if I’m incorrect, but on most UEFI-based systems shouldn’t the bootloader and all its dependencies live in the FAT32 EFI filesystem?

I know GRUB2 (which tends to be the most frequently shipped Linux bootloader for the major distros) is self-contained in that regard; boot stanzas pertaining to Windows just point at Windows’ `bootmgfw.efi`, with GRUB chain-loading that EFI program.

Deleting the Linux partitions you’re using should not affect that Windows entry.

Of course, it will knacker your Linux entries; though depending on how your distro does things, if the kernel and initial ramdisk are in the EFI you may get dumped into an emergency prompt on the root of the initial ramdisk (your “half-boot” example) and have the kernel itself tell you it can’t find the filesystems it’s supposed to use as the system root. But if GRUB can’t find the kernel (as in the case where the kernel is on another filesystem) it should tell you it can’t find it and dump you back to the menu, to choose how to proceed with this new information.

Ubuntu’s implementation of GRUB, as one example, includes a menu option that brings you to your UEFI configuration menu, where you may be able to select an alternative bootloader program or boot from another storage medium.

Of course none of these things are new-user or non-technical-user friendly.

I do agree with you that these are poor user experiences, and that the low-level user experience should be improved. But in certain cases, like the half-boot example, this is actually very useful as a last-resort troubleshooting stage.

Now I can’t speak for all UEFI implementations, because many are garbage and barely functional, but most decent implementations do allow you to delete bootloader entries either from the “bios” menu or via “efibootmgr” on a Linux live disk. That said, this doesn’t delete the partitions underneath.

An additional item of note is that most operating systems at install time treat themselves as first class citizens at best (Linux), or the only operating system to be installed on the machine (Windows). There isn’t likely to be much incentive for the developers to improve that.


These days, I use an SSD for each OS —Windows 10 to play games, Ubuntu for everything else— and it works perfectly. Before the cheap SSD era I found dual booting to be a nightmare, though it taught me a lot about GRUB workarounds :)


Even with SSDs I still find dual booting to be painful enough to avoid.

Mainly because of how disruptive it is to have your dev environment torn down when you reboot and then having to set it all back up when you come back. Even with tmux and tmux-resurrect it's not a seamless experience. This is an issue even with a few seconds' turnaround time on the dual boot itself.

Around 8 or 9 years ago I used to dual boot; then I transitioned to a Linux VM with Unity mode (VMware Workstation's seamless mode to run apps in their own floating windows), then to WSL 1, and now WSL 2. The only way I think things will get better is going native Linux with a GPU-passthrough KVM-based VM running Windows, to play games from Linux at native speeds without dual booting, but it's a huge pain to set this up properly.


Can you share files between Ubuntu and Windows, by mounting BitLocker on Linux and LUKS/ext4 on Windows?


You could do that with TrueCrypt/VeraCrypt. There's ext4 read/write for Windows, Btrfs for Windows, and NTFS for Linux/Mac (FUSE or Paragon). APFS also exists for Windows.


None of these do what the OP is asking. AFAIK there is no way to share files between the systems with the setup described in the article (short of something like a network sync or using an unencrypted USB key)

> Ext4 read/write for Windows, Btrfs for Windows

Useless if using LUKS.

> NTFS for Linux/Mac (FUSE or Paragon)

Useless if using BitLocker.


You can also play with VMs; see [1] for an example.

Other than that, it's a reason you don't want an OS-specific FDE but a platform-agnostic one. You can, of course, switch to one.

[1] http://augustbonds.com/luks-encrypted-ext4-drive-in-windows-...


No mention of the system time issues which can wreak havoc?

It's best to fix this on Linux with timedatectl. Changing it on Windows often ends up being a cure worse than the disease.


I’m curious what issues it causes in windows? I’ve never had much of an issue with this. I always change it in windows and I think it’s as simple as a registry edit.


> I always change it in windows and I think it’s as simple as a registry edit.

Simple means different things to different people. I'm fine with messing with the registry, but I know several (technically inclined) people who are not.

I find it easier to change in Linux because it's either a setting or in a config file, depending on the distro.


How did you do it?


Most desktops will have a graphical configuration you can use.

Alternately, you can use timedatectl on distros with systemd.

If you use a non-systemd distro, try hwclock.

Based on my ddg-foo, I was mistaken about being able to configure it via a config file, short of putting it in .profile or some such other thing. That seems hacky though, so I wouldn't recommend it.
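In case it helps, here's a rough sketch of the relevant commands — the timedatectl ones assume a systemd distro, and hwclock covers the rest:

```shell
# On systemd distros: show current time settings, including whether
# the RTC is interpreted as local time
timedatectl status

# Keep the hardware clock in UTC (the usual Linux default)
sudo timedatectl set-local-rtc 0

# Or store local time in the RTC to match a default Windows install
# (timedatectl itself warns that this mode interacts badly with DST)
sudo timedatectl set-local-rtc 1

# On non-systemd distros, hwclock does the equivalent
sudo hwclock --systohc --utc        # write system time to RTC as UTC
sudo hwclock --systohc --localtime  # ...or as local time
```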


what part is breaking my windows clock? i have to reset it with every reboot. do i change it in my bios so windows will be right and linux will be wrong, then tell linux to fix itself? because windows time sync setting sure as hell never actually syncs the time automatically.


The issue is that Linux sets the bios to UTC, while windows wants the bios to be local time zone.

I just do a regedit in windows to fix it. But you can do the change in either OS
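For anyone looking for the regedit in question, this is a sketch of the commonly cited fix, not official guidance: the RealTimeIsUniversal value tells Windows to treat the hardware clock as UTC (some guides use a DWORD of 1 instead of a QWORD):

```shell
:: Run from an elevated Command Prompt on Windows
reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" ^
    /v RealTimeIsUniversal /t REG_QWORD /d 1 /f
```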


You can always go full there-is-only-Zulu and set the Windows timezone to UTC. (insert head-tapping meme here)


That would fuck over any relative time calculations in calendars (including meetings in google calendar) though wouldn't it?


It shouldn't; I don't use Google Calendar at the moment but Outlook seems to have no problems with people in multiple time zones all seeing the correct time for a meeting.


> The issue is that Linux sets the bios to UTC, while windows wants the bios to be local time zone.

UTC is the default in many Linux distributions, but I wouldn't say "Linux sets the bios to UTC"; I've only ever seen it be an explicit choice.


Here's a nice article on how to do the fix, https://www.howtogeek.com/323390/how-to-fix-windows-and-linu...


Possibly moving to the UK is a solution :)


But only live there in the cooler months (when it isn't daylight saving time)


A better alternative is Iceland, it uses UTC the whole year round...


Much easier to run Linux on encrypted partition and have a Windows 10 VM in KVM.


Isn't the point of dual booting that there are things you can't do in a VM, like gaming?


There is also a thing called Virtual Function I/O and you can read more here: https://www.reddit.com/r/VFIO/ (basically you can pass hardware directly to VM and enjoy)


Note that for an NVIDIA consumer GPU, you can't really use it in the host and VM at the same time, because NVIDIA tries to restrict that feature to their data center cards. You'll need another GPU to render your Linux environment if you pass through your main GPU, and most gamers are likely using NVIDIA consumer GPUs. If you have an F-series Intel CPU or an AMD CPU with no onboard graphics, that means you'll need a free PCIe slot and another GPU (but even something like a GeForce 1030/Radeon 570 will do for non-gaming Linux use).
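For the curious, a rough sketch of what claiming the passthrough GPU looks like on an Intel host — the PCI IDs below are placeholders for whatever lspci reports for your card:

```shell
# Find the vendor:device IDs of the GPU and its HDMI audio function
lspci -nn | grep -i nvidia

# In /etc/default/grub, enable the IOMMU and bind those IDs to
# vfio-pci at boot (IDs here are placeholders):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio-pci.ids=10de:1b81,10de:10f0"

# Regenerate the GRUB config and reboot
sudo update-grub
```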


AMD consumer GPUs have SR-IOV disabled too.

If you want to use your consumer-grade GPU on both the host and guest, there's only a single option: Intel. However, the current product on the dGPU market, Xe MAX, isn't exactly the top-end GPU that people are searching for.


Except it doesn't work in a ton of cases. I've tried for days.


The author isn't encrypting /boot. GRUB does support this if you're using LUKS1 (and LUKS2 is coming soon). I wrote a guide for doing this with Alpine:

https://battlepenguin.com/tech/alpine-linux-with-full-disk-e...


Has there been any work on speed improvements?

I did this LUKS1 encrypted boot partition thing on a relatively recent laptop, and GRUB takes about 25 seconds to validate the passphrase. Once it loads the initrd, the early kernel environment validates the root volume passphrase in about 2 seconds, presumably because it's using an optimized implementation that's available in the kernel but not in GRUB...

Right now I wouldn't recommend doing this unless you have a really good reason for encrypting the boot partition. Most people will be better off having an unencrypted boot and enabling secure boot with your own platform key instead.


My understanding is the LUKS header block contains a key actually used to decrypt your data, but that key is itself decrypted by a key derived from your password or other auth data. GRUB uses the derived key to obtain the data key; the kernel already has the data key (or you'd have to enter your password again).

When you set up the encrypted drive, it deliberately picks settings for this that will be as slow as possible to resist brute force attacks while also aiming not to take over 30s on your current CPU. You'd have to manually specify a lower iteration count to cryptsetup to get a faster unlock.


I'm on a different machine now, but I'll have to go back and verify what I did -- it sounds like I may have set it up incorrectly.

My /boot is a LUKS1 volume, and I thought that my GRUB boot passphrase for this volume was in the first slot, and that I was using the same iteration count as the root volume.

My / is a LUKS2 volume with a different passphrase that I need to enter after the initrd has been loaded. Decrypting the root volume is fast, so I suspect I set this one up correctly. Once the root volume is decrypted, I have a separate key on the filesystem to re-decrypt /boot so that it can be mounted without re-entering the /boot passphrase. This part is also fast.

Something must have gone wonky with the boot passphrase. Either a crazy iteration count or a key slot that forces GRUB to not test it first.


Ha. ~100k iterations on root, ~2 million iterations on boot.

Whoops...


For me it only takes 1 second. The wait time is configurable, by setting the iter-time: https://unix.stackexchange.com/questions/497746/how-to-chang...
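To make that concrete, a sketch with a placeholder device — luksDump shows the per-slot iteration counts, and luksChangeKey can re-derive a slot with a faster unlock target:

```shell
# Inspect the iteration counts for each key slot (/dev/sdX2 is a
# placeholder for your encrypted partition)
sudo cryptsetup luksDump /dev/sdX2

# Re-derive an existing key slot, benchmarking for ~1000 ms unlocks.
# You'll be prompted for the current passphrase.
sudo cryptsetup luksChangeKey --iter-time 1000 /dev/sdX2
```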


Has anyone talked with Ubuntu about tweaking its install process to make this easy? I imagine many people have a machine they want to run Windows on & don't want to pay for (or lug) an extra drive. It's probably a relatively small change to the installer, and a big win for ease-of-use.


I just went from a linux laptop to a dual boot laptop

I already have enough trouble with an unencrypted file system

Not sure where to keep my user files. On NTFS, they are hard to reach from linux. On ext, they are hard to reach from Windows. On exFAT, there is no journaling

Now I put them all on NTFS.


The times I've done this in the past I've had to rely on buried forum posts or "hope for the best" with the Linux installer options. That includes ambiguous reports on how encryption affects SSD wear/trim/etc.

I recently upgraded my laptop and stuck in a new NVMe SSD with hardware encryption features. It has some security trade-offs but protects decently against common theft, and it's hella convenient that it's all mostly managed in hardware beyond a pre-boot unlocker and a suspend-state kernel hack. Still had to dig deeeep into GitHub issues to get it going in Linux. Works out of the box with BitLocker, allegedly.


Does anyone else have their system time messed up by dual boot?


That's because Windows stores time in the RTC as local time, while Linux stores it as UTC. You can make Linux adjust to Windows or Windows adjust to Linux, but it's probably better to make Windows behave like everyone else: https://www.howtogeek.com/323390/how-to-fix-windows-and-linu... (Option 2)


https://wiki.archlinux.org/index.php/System_time#UTC_in_Wind...

IME it worked better the other way around, setting Linux to local time. But the Wiki recommends the opposite.


Thanks for this. I've had to just make it a habit to adjust the time anytime I boot into windows.


Omg, I set that registry setting yesterday

But the time is still an hour off


oh thanks, I remember glancing over that information at some point but I rarely boot into windows these days so was never motivated to look into it.


That happens because Windows and UNIX interpret the time differently (Windows as local time, UNIX as UTC). So you either change UNIX to use local time or Windows to use UTC by changing a registry value. A quick google should tell you how you can achieve that


From what I understand, KVM/Proxmox lets you switch OS in Linux, Hyper-V in Windows, and the macOS hypervisor even on Apple Silicon: https://developer.apple.com/documentation/hypervisor/apple_s...

Will these all allow encryption?


I gave up dual booting after they released WSL2.


I've had a lot of hope for WSL2, so much so that I've been using it as my daily OS. However, recently some directories under WSL2 are just empty and seem to be unrecoverable. I've filed a bug, however I can't find a way to reproduce it so I don't expect much motion. So far it's just random directories that are empty and I can't delete them due to the resource being busy, and unfortunately the Linux utilities to check the locks on the directories error out.

I hope I’m just the exception here, but I’ve had to start re-evaluating what I’m going to do longer term, the short term is easier (push all).


Same here; it depends on what they use the OS for. I develop with cross-platform languages most of the time (Go, Rust, C++), and WSL2 with Windows does the job beautifully :)


Funny, I gave up on dual booting after they released Proton.


I gave up Windows10, I mean Spyware10, after I managed to do everything I needed on GNU/Linux.


On a side note, does anyone know if you can use fast boot on a system which dual-boots with grub? I have a motherboard which supports fast-boot, but I understand enabling it makes it difficult to get into the BIOS so I am hesitant about enabling it.


I wonder if windows 10 can/will be able to boot off zfs ? Then we'd finally be able to share a disk and let zfs deal with encryption...


Why would that ever happen? Anyway you can boot a Windows VM off of zfs already.


Why not? There's already support for zfs.


Support in Windows? What support? Windows can't even boot from Microsoft's newer ReFS filesystem.


Obviously not boot support yet (or anytime soon) - but does W10 support booting off non-NTFS at all? FAT32/FATX/ISO/UDF? Just being able to use it for most of the disk might allow a boot/rest-of-disk split like with Linux?

I don't know, but I think it's an interesting idea. ReactOS on zfs...

https://github.com/openzfsonwindows/ZFSin


Having done this a few times with older releases using VeraCrypt and LUKS, I wouldn't wish it on anyone. Dist upgrades become nigh impossible.


Is there any way to do this on a dual boot system without having to erase everything and reinstalling both the OSes?


With the recent advances in WSL2, I’d say dual booting is hardly worth the effort anymore. This is assuming you need Windows.


I would like to use Windows as rarely as possible and I think WSL 2 doesn't allow that.


While I agree, that doesn’t mean that dual booting is the solution. My daily is Fedora Workstation and I have a Windows vm for stuff as needed.


I mean you have to run Windows as your host system. Personally I would consider this a major drawback which more than eclipses any convenience gains.


I guess I should have been more verbose in my original comment. Windows is hot garbage, but there hardly exists a good reason for dual booting. Use a Windows VM from Linux if you can, WSL2 from Windows if you must.


Why? It works great. Or is it just personal preference?


Mostly preference. I don't like Windows' approach to telemetry, and intrusive UI patterns.


Which intrusive UI patterns?


Candy Crush ads in the start menu.


Right click. Remove. Done. No more ads.


Why should I have to do that?


Perhaps because you don't like Candy Crush?


No I mean why should my OS (which I paid for) come pre-installed with garbage ware?


I am considering building a PC for programming and am genuinely curious if WSL2 can be considered a replacement for dual booting Linux and Windows? I am currently rocking a dual boot system (with legacy bios) and haven't had any issues so far.


It's pretty close. Big pain point for me still is that running an IDE from Windows and accessing the Linux FS is not quite as snappy as native-Linux (I use IntelliJ). So for now I still dual-boot.

This is apparently being worked on... in the future I will be able to run the IDE inside WSL and it will bridge X11/Wayland to Windows.


If you can, try using Visual Studio Code with their remote plugins. One of the remote plugins is for WSL2 and it works great.

That means that VSC installs the language server inside the WSL2 environment. I use it at home for my Rust/Python development and can't tell the difference.


I gave up on dual-booting a long time ago and 2 OSes fighting over overwriting the MBR every time I wanted to upgrade one or the other. It's just so much easier to maintain 2 machines.


I solved this by simply giving Windows and Linux a hard drive each. The boot records are completely separate, and neither one tries to read data from the other. (Personally I have no need for sharing; my NAS handles any file-level syncing, but that's minimal anyway.) Both the Windows and the Linux boot loaders are registered in UEFI, so I asked my BIOS to disable the hard drive preference and pop up the boot menu each time, letting me pick which OS to use manually.
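If you'd rather not hit the firmware's boot-menu key every time, the same picking can be done from Linux with efibootmgr — the entry numbers below are placeholders for whatever it prints on your machine:

```shell
# List firmware boot entries; both OS loaders should appear here
efibootmgr

# Prefer GRUB (say Boot0001) with Windows (Boot0000) as fallback
sudo efibootmgr --bootorder 0001,0000

# Or boot into Windows once, on the next reboot only
sudo efibootmgr --bootnext 0000
```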


> I gave up on dual-booting a long time ago and 2 OSes fighting over overwriting the MBR every time

This is why UEFI was invented, and if you actually read the post, you would know it details how to set this up with UEFI instead of legacy boot and have none of those troubles.


Isn't WSL2 a step backwards from WSL? A thinly veiled VM, vs. proper Linux emulation? What's the performance story and cross-execution (starting Windows programs from Linux side, or vice versa) on WSL2 now?


The interoperability is good except for a few applications.

File performance is similar to native as long as you store the files in WSL2.

The only issue I have faced is that port forwarding sometimes stops working and I have to restart WSL2 to fix it, which I solved with a simple script.

Other than that, everything has been smooth sailing. Native Docker is a huge win in my list.
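The commenter's script isn't shown, but a common workaround looks roughly like this (elevated Command Prompt; the port and WSL address are placeholders):

```shell
:: Get the current WSL2 address (it changes whenever the VM restarts)
wsl hostname -I

:: Forward Windows port 3000 to the WSL2 address printed above
netsh interface portproxy add v4tov4 listenport=3000 listenaddress=0.0.0.0 connectport=3000 connectaddress=172.20.0.2

:: When forwarding breaks, reset WSL and re-add the rule
wsl --shutdown
netsh interface portproxy delete v4tov4 listenport=3000 listenaddress=0.0.0.0
```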


The randomly assigned IP on every reboot is extremely annoying.


I use localhost for accessing stuff inside and outside the WSL2 and it works fine for me.

I never used the private ip. So, I can't say anything.


Don't use swap on laptops with eMMC SSDs, or with SSDs period. Constant reads and writes will wear them out quickly. Just get more RAM.


This used to be the case with consumer SSDs many years ago; nowadays the effect on the life span of an SSD is marginal. I've had (and still have) my swap file / pagefile on the SSD my cheap 4GB-RAM laptop came with for 5 years now, and the SSD is still perfectly fine.


Alternatively (or complementarily) to "get more RAM": to keep the system from going south as RAM fills up, set vm.swappiness to 0 or 1 and configure an OOM killer (anyone have thoughts on which is "best"?).
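For anyone wanting to try it, a sketch of the swappiness part (the 99- filename is just a convention):

```shell
# Check the current value
sysctl vm.swappiness

# Apply immediately (does not survive a reboot)
sudo sysctl vm.swappiness=1

# Persist it across reboots
echo 'vm.swappiness=1' | sudo tee /etc/sysctl.d/99-swappiness.conf
```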



