Microsoft suggests command line fiddling to get Windows 10 update installed (theregister.com)
241 points by curiousObject 8 months ago | 211 comments



    When installing the update, some users are finding themselves faced with an 0x80070643 error, a generic failure message. Unfortunately, according to Microsoft, "because of an error in the error code handling routine," this might not be the correct error.
Error in the error code code is giving the wrong error code for the error.

I'm curious how often the error code code needs touching such that there's an error in it that wasn't patched long ago. I would imagine it would be a set-and-forget kind of thing. Unless this error in the error code code has been there for a long time but gone unnoticed, in which case there might be errors in the QA of the error code code.


This is why I am extra vigilant, pedantic and strict when writing or reviewing code that handles exception and error paths. More than happy-path code, I want this to be simple, ridiculously easy and very loosely coupled. No inheritance, no abstractions, very shallow dependency trees and so on.

One time too many, a bug in exception handling has caused a system to go down. It is often not covered, or barely covered, in tests (it's exceptional, after all) and very unpredictable: more than anywhere else, exceptions tend to arise where you never expected them (it's in the definition, really).


Exceptions are by definition where some “unknown unknowns” are going to show up, so yes, it’s good to have simple code with as few assumptions as possible. At the extreme, you get Erlang-style supervision or “crash-first programming” where at the first sign of trouble you just immediately throw away all ephemeral state and start over.


A good reliable system pretty much has error handling every step of the way, down to crash-first programming at the very end.

It's easier to recover from errors higher in the system, but you can't predict every problem, so you need layers of error handling at increasingly lower levels with different recovery expectations.


Same. I shut down a ticket last week that would have created a "final" error handling path that happened to re-invoke half the code paths that could error.


This attitude seems a little silly to me—being "extra vigilant, pedantic and strict" about happy-path code would necessarily imply the same about "exception and error paths". Generally speaking, bugs mostly appear in a binary fashion and not in varying degrees—i.e., your code either reflects expected behavior or it doesn't.


You'd be surprised at how many awaits it takes to produce a try/catch. The attitude of being extra vigilant, pedantic, and strict around exception and error paths forces the happy path to be as well defined as it possibly can be given the requirements. It's defensive programming. Assume everything will break in ways you don't know about when writing it. Especially true for async/await code. I don't find it silly. It's like a seasoned sarge telling you not to pull the pin on that baseball object you have in your hand.


> It's defensive programming.

This would necessarily presume applying the same attitude towards the happy path—you're describing something short of that.

Or to put it a different way: defensive programming necessarily implies a skepticism that you even are sure what the happy/error paths are!


Oh you definitely apply the same attitude to happy path at the end after scrutinizing the exception/error path options.


> This is why I am extra vigilant, pedantic and strict when writing or reviewing code that handles exception and error paths.

I realize that this is a different person from you, but that's the attitude I am replying to.


Perhaps the implication is not

"extra vigilant ... [compared to my level of vigilance on the happy path]",

but rather

"extra vigilant ... [compared to other people I work or have worked with]"

The latter is how I read it, because I have definitely worked on teams where the happy path was heavily scrutinised but error handling was treated as an afterthought and rarely inspected too closely, leading to anything from error strings copied from another part of the application (with names never updated to match the new location) to throwing XException but only handling YException, so the XException bubbles up to a much less useful handler.


I don't tend towards expressing personal comparisons "in public" since they seem both inevitable regardless of the situation and disrespectful to bring up as a topic of interest—not to mention potentially exposing you as incompetent yourself. I guess not everyone else feels the same way.

Hell, I'm free now! Everyone I've ever worked with sucks donkey-dick for not coding how I do!


> I don't tend towards expressing personal comparisons "in public" since they seem both inevitable regardless of the situation and disrespectful to bring up as a topic of interest—not to mention potentially exposing you as incompetent yourself. I guess not everyone else feels the same way.

> Hell, I'm free now! Everyone I've ever worked with sucks donkey-dick for not coding how I do!

I read it as "extra vigilant ... [because it's natural for people, including me, to focus on the happy path]".

I didn't read it as putting down other people you've worked with, holding yourself up as some paragon, OR potentially exposing yourself as incompetent (I'm really not sure how it would do that anyway?).

People tend to focus on the happy path. It's good to be aware of that and try to correct it. It doesn't require or imply thinking poorly of other people.


> I don't think it's putting down other people you've worked with, holding yourself up as some paragon, OR belittling your own abilities. People tend to focus on the happy path. It's good to be aware of that and try to correct it. It doesn't require or imply thinking poorly of other people.

My experience definitely counters your own!


Could you elaborate? I've seen this tendency (or an acknowledgement of it) in even the best people I've worked with, ones who are far better programmers than me. I've spent a lot of my career primarily pair programming, so I've had ample opportunity to talk with them about stuff like this explicitly.


What matters is the structure of the program per se. The entire concept of the happy or error path is a cultural phenomenon that requires explicit acknowledgment and rejection to solve (excepting languages that demand error handling to function).

In my experience, these metaphors are introduced to meet time constraints and produce shitty software.

It's fine to talk about these metaphors as efficiency-accelerants, but they don't produce quality software. Not that there are many companies trying to do that these days.


I think you're wrong on this. How often does software get created without the creator having a primary goal in mind? How often is the main goal handling things going wrong? If I write something to build a chart out of some logs, the first, most basic thing I'm going to evaluate it on is "can it make the chart?", not "how does it handle a malformed log entry?". I will think about these things, but they're inarguably not the first thing (the happy path) I think about when I think about the purpose of the code as a whole.


I mean, how often is quality software made? I agree with you, but I'd argue most software is compromised by shitty incentives. Humans mostly write buggy and incoherent software—this is solved by consensus across disparate incentives. The "primary goal" as you state is often characterized by others as "bias"


> I mean, how often is quality software made?

I have no idea what you mean by quality software, so I can't answer that. Can you give an example? I also have no idea what your alternative framing is. It seems like fundamentally not how people think of software.

> The "primary goal" as you state is often characterized by others as "bias"

...yes, that's the point I (and I believe the others) have been making. People are generally biased to focus on the happy path.

I'm honestly having trouble figuring out your line of argument from your various comments. Maybe I misunderstood what you said earlier about your experience contradicting mine. Were you actually saying that you really did believe everyone else was worse at programming?


> Were you actually saying that you really did believe everyone else was worse at programming?

What? I'm surely just as vulnerable to giving a shit about the happy path as everyone else. I'm just saying this leads to bad software, and anywhere you see good software produced, you see people inspecting the code paths they suspected were error-free. The notion that giving extra attention to the error path leads to better software seems to confuse the concept of a flawed program with some finite bulk of work that needs to be done.


As a game developer, I definitely distinguish high-risk and low-risk parts of the code base.

There's code that can be allowed to fail, and furthermore, it will eventually fail due to the sheer amount of this code, the development time constraints, the number of possible game states, etc. I don't care that this code rarely fails under some arcane conditions, because this simply causes some button to stop working, some NPC to stop moving, but the game will remain playable. Even if the player notices the bug, they'll just shrug and keep playing. My aim is to make sure that the game recovers and returns to a healthy state after the level/save is reloaded. (Obviously, I'd like to fix/avoid every single possible bug, but it's impossible in practice. You'll have more luck continuously tracking in your head how dangerous the code you're working on is. Also, you rarely have the luxury of being the only programmer on the team. Bugs will happen.)

The other kind of code is the core game system stuff, the low level stuff, the error handling stuff, the memory stuff, the pointer stuff. You must pay special attention while working on this code, because failures will straight up crash the process or bring the game into an irrecoverably broken state (eg. all objects stop updating, stuck in some menu, the player never respawns...). Bugs like these are also highly prioritized by management. My update loop needs to be shiny.

Such is the reality of working on complex systems (or simple object-oriented programs ;))


TBF, I can't name more than a couple dozen sets of game software that I would consider "quality". That shit is generally developed on a deadline and it shows.


Well, yes, deadlines are best practice.


In "happy paths" I except some fancy abstractions, allow large dependency trees, complex state-machines or business-logic. I'm "fine" with deeply nested trees of includes, inheritance, modules and whatnot.

But I insist that the "unknown unknowns" are handled in an extremely simple, predictable and decoupled way. Because those happy paths will fail. And then the exception path must take over, and it must be able to handle the failure predictably.

So no third party error services that I have to call over HTTP (HTTP will fail). No complex logging libraries (dependencies will become outdated). No difficult tracing middleware that has to be configured "just right" (configs will break). And certainly no flows that require a spaghetti-wrangler to follow along.


Dating myself, but I used an OS/2 build that would occasionally report “This Error Message has been deleted”


... but isn't that good?


About a year ago I had a bug assigned that ultimately turned out to be caused by an exception in the logging code, which only occurred in the specific case of logging an exception thrown by the exception handling code. This was internal tooling, so I made sure to have some fun with the changelog entry—making it as confusing to parse as possible.


My favorite was "Something Happened" (smaller text) "Something Happened"


Maybe it was caused by a bug in the debugger.


> Error in the error code code is giving the wrong error code for the error

Try MS Teams: https://github.com/MicrosoftDocs/msteams-docs/issues/8539#is...


"But sir, on what does the Windows operating system rest?"

"Why, on the back of an error, of course."

"Um... and on what, may I ask, does the error stand?"

"Ah, but you're clever, young man. Too clever by half. It's errors, all the way down."


It might not even be human error in QA. This might be a code error in the error code code error finding code.


This is how I imagine the error code looks. How else can you even return the wrong error?

```c#
public class FooError {
    public string Message() {
        // TODO: I copy-pasted this from somewhere else. I still need to update it to a meaningful message
        return "this is a BarError";
    }
}
```


The code which outputs the error message is probably getting it from GetLastError() or similar (the Windows equivalent to POSIX's errno). But meanwhile, some error handling code has called some library functions (probably to restore the system to a sane state), and one of these library functions accidentally overwrote the global per-thread "last error" variable. That is, the error in the error handling routine would simply be a lack of doing a "saved = GetLastError()" at the start and then a "SetLastError(saved)" at the end.

This kind of mistake is very common when dealing with GetLastError() or errno, since most library functions you might call while handling an error do not overwrite these values... until something in one of them fails (sometimes even a non-fatal error which is handled internally by them) and overwrites that per-thread variable.
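A minimal sketch of the save/restore pattern being described here (the cleanup helper and file name are hypothetical, not Microsoft's actual code):

```cpp
#include <windows.h>

// Hypothetical stand-in for whatever cleanup the real handler does; any
// Win32 call inside it can clobber the thread's last-error value.
static void CleanUpAfterFailure()
{
    DeleteFileW(L"C:\\nonexistent-temp-file.tmp");  // fails, setting ERROR_FILE_NOT_FOUND
}

static void OnInstallFailure()
{
    DWORD saved = GetLastError();   // capture the original failure code first
    CleanUpAfterFailure();          // may overwrite it...
    SetLastError(saved);            // ...so restore it before the caller reports the error
}

int main()
{
    SetLastError(ERROR_INSTALL_FAILURE);   // pretend an install step just failed (1603)
    OnInstallFailure();
    return GetLastError() == ERROR_INSTALL_FAILURE ? 0 : 1;   // 0: the real code survived
}
```

Without the GetLastError/SetLastError bracket, the caller would end up reporting whatever the cleanup call set instead of the real failure.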


Yeah that's almost certainly the issue.

Last year I had to debug a Windows issue that caused the following: we get back a struct from the Windows API that contains an HRESULT style error code, and also a string error message in case of error. Printing the string revealed that it started with the error code in hex, so we just logged the string. As part of a workaround for another Windows bug we had to check the error code for a specific value and do something different.

Eventually we got a report from a Windows machine in the wild where the code seemed to be doing something impossible: the log indicated the error code we looked for was being returned, yet, the code didn't take the conditional path that checked for it. After a lot of head scratching we realised there was only one possibility: the error code printed in the error message and the error code returned in the structure didn't actually match. Sure enough that turned out to be the case. Somehow Microsoft had generated the textual version of the error successfully, and then the machine readable error code had been replaced with something else before it was returned. The fix was to compare against both the HRESULT and the stringified form.

Naturally, this error only occurred on some Windows installs and not others for no obvious reason. The core OS seems to have tech debt issues so severe that it's basically impossible to verify even for app developers - you can do everything right, test it all carefully, and your code will still blow up in the wild in an undocumented fashion because no two Windows installs are quite alike, even (perplexingly) for fresh out-of-the-box installs that appear to be the same version. At this point even Linux is more consistent than Windows 10 is. I can't imagine how painful it must be to work on Windows deployments at Microsoft. The install base has become so inconsistent that they're pretty much guaranteed to trash people's systems with every change they make by now.


I remember one bit of customer code where I needed to pass a BSTR string into a function. I constructed it using CComBSTR without considering that the destructor calls SetLastError. I could see the call failing but the only information I got back was ERROR_SUCCESS.


In C/C++ land, errors are often mapped to constant integer values, whether via an enum or just a list of "codes". In this case, I suspect (like OpenGL) it's just a giant list of codes defined as "extern const uint32_t MY_SUPER_ERROR = 0x0001", and some other part of the code has defined the same name with another value. Then the code tries to match the value and print something human-readable, but the match is made against the wrong definition, because MY_SUPER_ERROR is 0x1000 in some other part of the code. This happens when people in C/C++ land follow this logic:

Lib Guy 1: "I need an error.h to store all my errors to make it easy for me"

Lib Gal 2: "I need an error.h to store all of my error values so I can just use type names like a boss"

Exe Guru 1: "Which <error.h> is this?"

The other option is that an error was thrown inside error handling code which threw an error and was unhandled, throwing an error generically at the base.
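A toy illustration of that "two error.h" failure mode (names and values entirely made up), where the number matches but the lookup table belongs to a different component:

```cpp
#include <cstdint>
#include <cstdio>

// lib_a/error.h
constexpr uint32_t LIBA_DISK_FULL = 0x0001;

// lib_b/error.h -- same numeric value, completely different meaning
constexpr uint32_t LIBB_BAD_CONFIG = 0x0001;

// Lookup table that belongs to lib_b, not lib_a
static const char* DescribeLibBError(uint32_t code)
{
    return code == LIBB_BAD_CONFIG ? "configuration file is invalid"
                                   : "unknown error";
}

int main()
{
    uint32_t code = LIBA_DISK_FULL;           // library A is what actually failed
    std::printf("error 0x%04X: %s\n", code,   // ...but we describe it with B's table
                DescribeLibBError(code));     // prints "configuration file is invalid"
}
```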


0x80070643 is ERROR_INSTALL_FAILURE, so yeah basically somebody clobbered the error code in some installation routine. It's in winerror.h, according to https://learn.microsoft.com/en-us/windows/win32/debug/system...
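For reference, the decomposition is simple: the facility field is 7 (FACILITY_WIN32) and the low 16 bits are 0x643 = 1603, which winerror.h defines as ERROR_INSTALL_FAILURE. A quick check in plain C++ (no Windows headers needed):

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t hr = 0x80070643u;
    uint32_t facility = (hr >> 16) & 0x1FFF;  // same mask HRESULT_FACILITY uses; 7 == FACILITY_WIN32
    uint32_t code     = hr & 0xFFFF;          // 0x0643 == 1603 == ERROR_INSTALL_FAILURE
    std::printf("facility=%u code=%u\n", facility, code);  // prints: facility=7 code=1603
}
```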


“What’s a null?”

“Well, it’s the absence of a value, or 0, or nullptr, or…”

“Forget it, you’re insane.”

- My son.


My guess is that part of the error handling is appending to the logs. Maybe something like (I can't remember C#, so thinking about Java instead):

    try {
        try {
            windows_update_function_that_saves_to_disk();
        } catch (DiskSpaceException e) {
            DiskLogger.log(e); // itself tries to save to disk, and can throw again
            e.printStackTrace();
        }
    } catch (GenericException e) {
        // something went really badly wrong, generic error
    }


The logic in the `catch` block could be complex enough that they got an if/else backwards or something like that. E.g., they could catch an exception and then have the body of the `catch` return six different error messages based on details about the exception, system status, etc.
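Something like this hypothetical sketch, where a single inverted condition means callers always get the generic code even though the specific failure was detected:

```cpp
#include <cstdio>
#include <stdexcept>

// Hypothetical exception carrying details about why the update step failed.
struct UpdateError : std::runtime_error {
    bool diskFull;
    explicit UpdateError(bool d) : std::runtime_error("update failed"), diskFull(d) {}
};

static unsigned MapToErrorCode(const UpdateError& e)
{
    // Intended: report 0x80070070 (ERROR_DISK_FULL as an HRESULT) when diskFull is set.
    // The stray '!' is the kind of backwards if/else meant above.
    if (!e.diskFull)
        return 0x80070070u;
    return 0x80070643u;   // generic install failure
}

int main()
{
    try {
        throw UpdateError{true};   // the partition really is out of space
    } catch (const UpdateError& e) {
        std::printf("0x%08X\n", MapToErrorCode(e));   // prints 0x80070643, the wrong code
    }
}
```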


"Oops, something went wrong"


Wait until we start seeing error messages like "I Cannot Fulfill This Request It Goes Against OpenAI Use Policy".

Then we'll know why QA went bad.


> "because of an error in the error code handling routine,"

WinErr:013 Unexpected error - Huh ?

This may be the famous "/* NOTREACHED*/".


Love it. I'm lmao now at such craziness.


I've followed the instructions and understand that they may seem overwhelming for some users. Instead of using command line methods, I opted for Windows Disk Management to shrink the drive's partitions and create the necessary space.

It's surprising that a script hasn't been provided for this process, which suggests the complexity and potential pitfalls involved. This leads me to believe that the absence of a scripted, simple double-click solution is due to the intricate nature of the task. Consequently, I'm not optimistic about a prompt Windows Update fix, as many had hoped for.

However, I anticipate that third-party developers will soon release scripts or programs to automate this process.


I'd expect that "automatic disk and partition changes" have a rather high risk of severe data-loss.


It does, especially for non-technical users, who can't double-check the "automatic" parameters of the change.


It looks like a PowerShell script is available now, but I don't think it's changing the partition size. Has anyone tried this out? https://support.microsoft.com/en-us/topic/kb5034957-updating...


I can only see it using dism to apply the patch, but there are no commands to manipulate the partition size, so if the cause is lack of free space it will fail the same way the update did.


> a vulnerability that allowed attackers to bypass BitLocker encryption by using the Windows Recovery Environment (WinRE)

I always wondered why the recovery environment seemed to let me have SYSTEM privileges on my auto-decrypted system drive... That's been the case for years, and you can use it to dump the contents of a machine you've lost the login password to, for example.

I guess it wasn't supposed to!


Only if you don't use a TPM PIN and/or have a <2.0 TPM.

Bitlocker using TPM with PIN has no way to know your hard drive key without asking the TPM, which requires the PIN (and has anti-hammering/lockout built in.)


IIRC WinRE asks for a local administrator password before doing any recovery operations, no? Also, this is one of the reasons why "auto unlock" is generally a huge asterisk and PIN/password/network unlock are recommended instead, if possible. (Though they do force you to be a laptop nanny during update times and unlock it on every restart...)


> WinRE asks for a local administrator password before doing any recovery operations, no?

Not for the "Shift+F10" command prompt option....


Are you sure the system drive you see is the real drive and not just the bootable WinRE partition? You technically have two "?:\Windows" roots in that state, one of WinRE that just booted and one of the system you're restoring.


Can Microsoft just bring back a proper QA department instead of leaving it to unpaid "Insiders" who drink the Flavor-Aid and think Microsoft can't code a single thing wrong?


Based on the feedback community, I don't think that's a particularly accurate take on Insiders as a whole.

That said, it's been ~10 years now since they dumped the traditional QA department (which involved more than just leveraging external user validation), and I honestly can't say Windows is blowing up any more than it was in the 10 years before that. I'd really like to see some actual measured statistics, e.g. patch-issue outages at a health system over the last 20 years, though.


Gee... Let me forward that request to our QA depart... Oh. Oops.


As daunting as the fix seems, I appreciate them offering any at all; a couple of years ago, when an update broke ReFS arrays for more than a handful of people, we were never given a solution beyond rolling back, until the update eventually became required and impossible to uninstall, necessitating rebuilding the array from scratch.


It's something, sure. But it is still a major fuck up, so them seeing fit to throw us some scraps (for a paid product!) is nothing to appreciate. Maybe I'm reading things into your comment that you didn't mean to say, in which case I apologise.

We should hold Microsoft accountable for borking installs and coming up with a half-assed fix even if it is a complex problem. They're in the lead on the desktop OS market and make billions upon billions of dollars that way. They should do better.


The issue is that the recovery partition is missing or not large enough. Since I don't need a recovery partition in my Win 10 VM, I deleted it after installation and can now no longer update my installation.


I followed the instructions to disable the recovery partition, as I don't think I had allocated any space to it, and upon reboot I was able to upgrade to Windows 11; hopefully I didn't create any further problems down the road. I really didn't want to mess around with resizing my system partition, as the instructions alluded to as the next step. I guess YMMV, but I find it amusing that this random bug I ran into is on the front page of Hacker News.


OK, but I don't want Win 11 :) just keeping an installation around for gaming.

The reason it's on here is that many were affected, I guess.


"I use Windows because I don't want to have to go to the command line to fix one simple thing."

Yeah, join the club. Actually, you've been in the club. A few years back I had Windows decide that I didn't have permission to access my own user directory anymore, resulting in a system with a glitchy Explorer and taskbar where nothing worked. To fix things I had to drop into an administrator console, create a new user, then move all my old files from the old user account to the new one and change ownership and permissions on them. So no, the idea that Windows is easier because it's graphical is BS. If the system is going to get into weird failure modes that I need to type commands to démerde myself out of anyway, I'm just going to stick with Linux and NetBSD.


As a former Windows dev, I remember getting super frustrated with their Windows upgrades. You could no longer reliably write to your install folder. They started forcing you to write to the registry, then to the data folder, then to other places. So you had to split up the install "ten ways" in how and where you kept your data: HKEY_LOCAL_MACHINE, HKEY_CURRENT_USER, and so on... then ask for elevation to install. I vaguely remember debugging a bug where I would delete a file in my install folder, but then found out Windows was restoring an older version of it in the background. C:\Program Files was virtualized, and not really a directory anymore.


>You could no longer reliably write to your install folder.

I think that came in with Windows XP and it was a good thing. Letting user programs write into a folder full of executables is terrible for security.


There is a case to be made to not install all updates as soon as they are available.


I do that (wait a bit) on macOS and iOS, but I wasn't able to explain to my Windows that I don't want to update automatically. Especially in the middle of some overnight computing task.


This is a Pro feature on Windows, but you can enable it on Home editions with some PowerShell commands, or by side-loading the program used to set the relevant settings on Pro editions (the Group Policy Editor).


I have Pro and it conveniently "forgets" my update settings and will still do it whenever it feels like. Microsoft doesn't understand that no means no.


Did Microsoft change that, or does Windows 10 Home still force users to install the latest updates immediately?


Do they dogfood these updates internally like Google does with Android and Chrome OS?


Had the journalist tried Linux, they would have known that it gives much better error messages than some random-looking number.


It literally gave the wrong error message?

> Unfortunately, according to Microsoft, "because of an error in the error code handling routine," this might not be the correct error.


Yes, and it's ironic that there's an error in the error handling routine :D


It's the one-click solutions that make Windows a nice tool to use, not the error messages.


Exit code 139. Hm.


That's usually not the full error message, and you can still easily find the number's meaning by opening the man page of whatever piece of software terminated with that exit code.


Likewise, if you knew about err.exe, you'd also know how to be able to get the error message behind the error code.


Depends really. If it's some kind of runtime or VM you pretty much have zero idea what the exit code was. Python, CLR, JVM tend to leave you in this mire.


Yes, but remember that usually nowadays a piece of software also logs something to stderr instead of relying just on exit codes to signal error.


Not necessarily. Most of the 139's I get these days are SIGSEGV related. Obviously if you annoy the MMU at that level it's not going to get given an opportunity to write anything useful to stderr.


Yes, but sometimes useful messages can only be found in dmesg.


dmesg has been there since forever. If anything, Windows (auto) update has been one of the worst parts, especially when it fails to update... or worse, replaces your video/printer driver with whatever it finds. The latter (video) messes up audio for HDMI/DisplayPort in addition to installing bogus drivers.

I'd be a tad happier to have apt install / apt update / upgrade instead of Windows' scheduled 'updates'.


Is it me or is Microsoft degrading user experience on Windows 10 with constant system breaking updates? It feels that most of them do nothing but slow the system down or change things for the sake of change (some project managers want to show they do something?).

It feels like Microsoft is trying to "convince" users to switch to Windows 11 by making Windows 10 bad.

At the same time Win 10 computers without this UEFI cannot be migrated to Win 11...


Once you've paid for Win 10, you as a user are no longer revenue but a cost. And you can obviously guess what happens to costs.


But why bother to change the UI at all?


To make you buy what you currently have as a future feature


Or more likely they don't test for Windows 10 regressions as much.


I wonder what their testing looks like these days. My impression is that they fired their QA and never looked back...

https://news.ycombinator.com/item?id=20557488


I remember reading years ago that the reason Windows updates since 8 or 10 were so bad was that MS fired all the testers and scrapped their testing hardware, and now they test virtually in the cloud. I guess nothing has changed, still. Cost cutting at its best.


>MS fired all testers and scrapped their testing hardware and now they are testing virtually in cloud

Do you have any sources for this? AFAIK, according to the LTT WAN Show, when he was discussing his conversation with MS about Windows sleep issues, he learned MS still uses plenty of bare-metal HW for testing, especially notebooks, and in no way got rid of it.


The status quo was separate software developers and software developers in test, in nearly equal numbers [1] (I haven't read this book, only the description, but it seems authoritative).

The new normal changed around 2014, as described in this Ars Technica article [2]. In order to ship more stuff, more quickly, Microsoft eliminated the developer in test roles, removing the bottleneck of a specific role in charge of quality and hoping to diffuse the responsibility.

This addresses the 'fired all testers' part of your quote. I don't have references on the 'scrapped their testing hardware', but I imagine most testing hardware was maintained by developers in test, and when their positions were eliminated, they may not have had anyone to transfer the hardware to.

[1] https://www.amazon.com/How-We-Test-Software-Microsoft/dp/073...

[2] https://www.amazon.com/How-We-Test-Software-Microsoft/dp/073...


No, I just remember a comment from some former MS dev who knew something about the testing process at MS. I think it might have even been a comment on here, actually.


> MS fired all testers and scrapped their testing hardware and now they are testing virtually in cloud.

I'm fairly certain this isn't the case. 1) Microsoft still adds Microsoft drivers for hardware; this implies having hardware and people to work with it. 2) Microsoft has a massive base of corp and gov customers buying hosting, surveillance, etc. Widespread issues with Windows could affect that.

I'll go out on a limb and guess the fired all testers theory arose when the insider program became widely available to the public.


There was plenty of news coverage of the Windows test team being eliminated at the time of Microsoft's 2014 layoffs:

"Under the new structure, a number of Windows engineers, primarily dedicated testers, will no longer be needed. (I don't know exactly how many testers will be laid off, but hearing it could be a "good chunk," from sources close to the company.) Instead, program managers and development engineers will be taking on new responsibilities, such as testing hypotheses..."

https://www.zdnet.com/article/beyond-12500-former-nokia-empl...

There were also a lot of insider comments from affected SDETs on the notorious (at the time) Mini-Microsoft blog as well: https://minimsft.blogspot.com/2014/07/18000-microsoft-jobs-g...

The burden of testing software was shifted over to developers after the layoffs. If you check Microsoft's job listings, there isn't even a category for software testing positions anymore.


Nope. As indicated by a sibling comment, Microsoft used to have parallel SDE and SDE/T tracks and orgs.

They killed the test org completely around 10 years ago because of course developers can just do TDD and ship an operating system with 30 years of legacy applications/hardware ecosystems around it. Or "telemetry will just tell us what to fix"

Of all the mountains of dumb shit this company has done in the past 10 years, this is actually what killed Windows for the people (like myself) who actually used to like Windows.

Edit: To add sources, I worked at Microsoft years ago and still have friends in those orgs.


I haven't liked Windows since Friends was still on the air, but even I must concede... the version shortly before they dumped their testing org, Windows 7, was probably the best version they made.

Windows 9x was extensively user tested using mockups made in frickin' Visual Basic; user feedback was incorporated into the next round of mockups to converge on something that was actually easier to use. It was agile development before that was a buzzword. What's replaced that? Just rolling out whatever UI brainfart the design team came up with into the product and using bitching on X (formerly known as Twitter) to gauge whether to keep it?


McDonald’s methodology. Optimise for volume not flavour and quality with a veneer of marketing bullshit smeared on it.


> Unhelpful error codes, complex fixes ... When did Windows turn into Linux?

Hmm


Btw, I use Arch Windows.


Yup. These are the people society put in charge of keeping us informed. They'll probably need to take jobs as janitors once AI replaces them, as they are prime candidates given how unreliable their writing is.


Why, it's a funny pun against Linux and it's true, so where's the problem with the writing?


That won’t be long now…


The issue with the WinRE environment and Bitlocker bypass exploit has been known since at least last year with a January 2023 KB addressing the issue, but the WinRE environment still needed to be updated manually. Looks like 12 months later if you hadn’t already dealt with it, another issue can arise according to this article?

More information on the fix can be found discussed here: https://old.reddit.com/r/sysadmin/comments/10atdqe/is_bitloc...


FTA: "Showing an unhelpful error message and then requiring a user to delve into the world of the command line to fix things. What is this? Linux?"

What did the author want, a button to push for each solution to every possible exotic error, even critical ones that would require resizing a system partition from a running OS? Seriously!? Of course one solves this through the command line.

Or, if that wasn't what the author meant, then he may have to upgrade his Slackware 4.0 install, because a thing or two have changed in the Linux world.


The beauty of Windows is that you have one-button-push solutions that just work. If Linux had that too, maybe even tech-unsavvy users could handle it, but since Linux is made by freaks for freaks, that won't happen. I mean, Linux people can't even see the problems of a not-one-click solution, but for every normal user it's crystal clear. The command line is for tech users and really for nobody else. Nobody who isn't into tech wants to write in command shells, really, nobody!


Sure, and I wouldn't want it any other way, or I'd just use Windows in the first place. That said, on Linux I still get error messages, not a big number – an actual message that tells me what didn't work, so if googling that error message doesn't yield a solution, I can go ahead and try to fix the issue myself.

I still remember from the Windows days when you were unlucky enough to hit a more unusual error code, where googling it didn't immediately result in a dozen forum threads with people posting the same issue and then some dude replying with a fix where you don't even know how they possibly figured that out. If you didn't find a solution googling the error code it basically meant you can try random shit until you accidentally fix it, or reinstall. On Linux I could just ultimately look at the source code of whatever component is broken and track back from where the error is printed.


> even critical ones that would require to resize a system partition from a running OS?

Windows updates can usually do this actually. But only if the recovery partition is after the system partition. On my systems that have been around for a while and had a recovery partition in the front of the drive (as used to be common), there's often a handful of recovery partitions, because the original one was too small, so some update made a new one at the end, and then that one was too small and another update made another one at the end. Kind of messy... and hard to cleanup, I don't think many filesystems are easy to extend or reduce from the front of the partition.


It's typical flamebait The Register is very fond of. I have seen entirely unnecessary references to everything from Brexit to religion on The Register.

It works, too. Lots of comments, both on their own site and here, purely in response to mentioning Linux in the subheading.


Lots of small software companies existed from the 95-XP era to do exactly this, to fix errors such as missing paths or dll issues for major updates and upgrades. Seems like Windows Update is ripe for supervision again.


Well, yes. That is Windows' (or the Mac's) whole shtick.

People get Windows/Mac for the convenience of 1 click solutions and the understanding that most things will work with one click, or at most a few clicks and a login.

It also comes with the expectations that such massive companies won't completely annihilate their computer with a buggy update.


"These instructions are not for the faint of heart. The first link requires the user to open a command prompt window with administrative privileges. It's downhill from there ..."


sudo cmd.e×e | /bin/exorcise

(edit: the actual extension triggered HN's blocker..)


Why do I get this faulty patch if I'm not even using BitLocker?

Also, how the heck does something like this happen? Did MS forget the size of the WinRE partition which THEY created?


> Why do i get this faulty patch if i'm not even using Bitlocker ?

You might not be using it now, but you might want to switch it on later.

And more importantly, it prevents an exponential explosion on the amount of different variants of the code. If you allow people to not get this patch, and later another bug is fixed, either you force people to get this patch before getting the next one (leading to N different versions), or you'll have to develop a fix both for people who have and for people who don't have this patch (leading to 2^N different versions).


My Win 10 kept giving me the "bug check" reboot treatment and I had to go to Win 11 reluctantly. Oh well, there goes my vertical taskbar.


Note that there are tools to restore the win10 taskbar in 11. I use startisback, but there are more. Also be sure to use a privacy tweaker!


If only there were a way to transfer my Windows OEM license which comes with my laptop, I'd totally switch around the host/guest setup and shrink my host Windows to a guest VM.


Massgravel's MAS is an option. Not a license transfer but gets you an activated machine.

HN appearances: https://hn.algolia.com/?query=massgravel&type=comment


> meant to address a vulnerability that allowed attackers to bypass BitLocker encryption by using the Windows Recovery Environment (WinRE).

How can you screw up so badly that you even have the ability to bypass the entire reason for a security feature existing?

Sorry to be mean to Microsoft, I am certain there's usability reasons for this, but sincerely: principle of least surprise please.

I'd be incredibly surprised if Linux kept a copy of my unlock keys around somewhere that were available to anyone except me. That is a truly monumental fuck-up.


It's possible partly by the user shotgunning their own foot. WinRE gets auto-decrypt access to your drive because upon setting up BitLocker, the user chose "automatically unlock this drive on startup" instead of a PIN unlock. (Which then utilizes the TPM to auto-unlock so it's not like your keys are on the storage device directly)

If you use any non-auto unlock -- PIN, passphrase, network -- then WinRE can't magically access your data and needs that code to then retrieve a key from the TPM.


> I'd be incredibly surprised if Linux kept a copy of my unlock keys around somewhere that were available to anyone except me

You can set it up to auto unlock:

https://devicetests.com/decrypting-luks-encrypted-filesystem...


Had a machine that was stuck on this update, the manual workaround resolved it.


Hit this Sat on an install. Used MS's PS script; it needed 2 update files (matching the Win prod) to be downloaded first.


Surprised Windows 10 Home still has a command line option.


But it doesn't have BitLocker drive encryption.


Home can read/write previously created Bitlocker protected drives. On systems that have proper modern standby/tpm (mainly laptops) it also supports whole disk encryption.


Haha! Yes, this was a mess!

I was called to help my concerned stepson with "an update error" this past weekend. I'm a software engineer but even I kind of balked at it at first, realizing how very easily an on-the-fly resizing of partitions can go wrong!

Let's take it from the beginning.

He pointed me to the "out of disk space" error code, which isn't in text form to begin with, but only something you find out by googling it. I thought that must be wrong, because the drive is like 30% full. Knowing how this particular error is sometimes thrown inaccurately, I dug deeper and ended up in a long winded forum thread about all sorts of people and skill levels having issues. Novice home users. Advanced users. SERVER ADMINS.

It soon dawned upon me that the disk space error was correct; it was only talking about it from the perspective of a tiny Windows Recovery Environment partition! Or "tiny"... It was like 300 MB large but not enough. It wanted something like another 500 MB now.

People yelled at MS for not pulling the patch and fixing it or bringing an automated post-patch fix.

BUT if this HAS to be done, MS is in a tough spot now, because it's inadvisable to resize partitions automatically. You usually want backups, as it's a high-risk operation where power loss or bad input will brick the OS install. I also think how and if it can be resized depends on the layout of the partition table. For example, I had to reduce the end of a former partition to increase the start of the recovery one. Fortunately, I had free space there. I can imagine a Microsoft script wizard would have his heart sink as he'd ship an automated partition resizing script to millions of systems and all their intricacies.

So, I'm not sure how MS will end up fixing this, to be honest. My only "fix" would be to rewrite the code so that it doesn't need a larger Windows RE partition anymore?? OR simply not installing this patch if it's too small. Otherwise, this can only really be "safely" fixed as part of a fresh Windows install with hard drive reformatting and the whole shebang.

I ended up fixing it with a free partition resize tool (MiniTool Partition Wizard). MS advised me to the command line but to hell with that and their ancient DISKPART.EXE and having to input the correct partition # for your partitions when there were like five of them on the drive. I needed something more visual as assistance to hand hold me from making mistakes.

But how on Earth could this even happen? Did MS just kind of forget that they have had previous, smaller Windows RE requirements in the past?


You've skipped to the tough spot, but before that - actually, why is 300 MB not enough?


Windows install ISO is 4.7GB. 300MB-500MB seems in line with what core restore files would require.


I thought they had many computers of various configurations they test on. And wasn't there a release note describing what could happen? So some part of the org knew what was going to happen.


I was under the impression they USED to do this and only have oddly minimal QA on updates anymore.

i.e., I make a point to never run Windows updates; they've only caused more harm than good post Win 7.


I had this update issue on a fresh install of Windows 11 at work. Then I went home and had it on my Windows 10 laptop... thank you Microsoft :|


I also got this error... If I understand this correctly, they want a recovery partition of 250 MB... but I already have a recovery partition of 500 MB (according to Disk Management, which lists it as 100% free)... so why am I getting this error?

Should I increase it by a further 250 MB (for a total of 750 MB), or are they just plain crazy?


The update requires 250 MB of _free_ recovery partition space, not the total size of the partition.


But my 500 MB partition is listed as 100% free by Disk Management.

Is that not accurate? How do I find out how much free space I actually have?


It's the year of the Linux desktop, everyone!


Microsoft WinLinux 2024 Professional for 64bit extended systems Edition


If Microsoft licensed Linux, they would have something like 2*128 SKUs to describe all the different versions


The monkey's paw curls and WSL is what the Year Of Linux On The Desktop turned out to be.


Didn't Bill Gates "kill the command line" in an infamous video when Windows 95 came out?


I think it was DOS he killed. But I get what you mean. When Microsoft was completely dominating the IT landscape during the 2000s, the command line was often sneered at by "Microsoft Certified Professionals".


Sure but they released PowerShell in 2006, a few years after he stepped down as CEO.


"When there's a shell, there's a way".


BTW, I got this error after installing Windows 10 Pro 22H2. So it's extremely stupid.


Glad to see that Microsoft is making Windows more and more similar to Linux by every passing day :)


I really do not agree with this comparison.

Yes, it is true, that on Linux there are many, many things which are best done through command line fiddling.

But installing a system update, that update borking things, belching up a totally cryptic error message, and then forcing you to clean up the mess through the command line, is not something I've experienced on Linux so far. For me, apt updates have always been super reliable - and error messages have mostly been quite specific and easy to google.

So, this specific case seems nothing like Linux to me.


> For me, apt updates have always been super reliable - and error messages have mostly been quite specific and easy to google.

There's an ocean between them. While you or I might appreciate a good error message and handy debugging options, the standard Windows user experience is optimised to not confront users with errors at all. Either approach comes with its trade-offs and is implemented with varying degrees of success.


Ah yes, the classic "if I can't see the bad thing, it doesn't exist" approach to problem solving.


Different design goals. While there's tension between Windows/Linux (which, interestingly, has over the years partly been relieved by each adopting pages of the other's playbook), there are a lot of complex or critical systems in the world where displaying an error message to the user is clearly not a great design choice.


Yes, and some people just want to take their car to the mechanic, while other people want to fix theirs. It's a classic struggle in any area. The big difference though, is if I use my car in a professional capacity, I want to know how to change my own oil, because I'm not an idiot. I want to be able to fix simple issues, because that's just being professionally responsible.

Windows discourages this kind of professional responsibility, hides everything it can that would give you an inkling of what's going on under the hood. Microsoft wants you to be dependent on them for doing ANYTHING out of the ordinary. And of course, when anything out of the ordinary occurs, there is wailing and gnashing of teeth from millions of windows users, billions of dollars lost by their employers, and all the Linux users just stare in disbelief that y'all don't know how to change your own oil. If you use a computer for work, should probably know how it works. Microsoft has intentionally made it difficult, so... Have you tried Linux, the backbone of modern computing, installed on more devices than Windows by at least a factor of two?


Might be a bit of hyperbole.

Microsoft has extensive documentation of nearly every API they support, and some they don't. Comprehensive documentation with notes and caveats. Automatically available from their tools.

Compare, for instance, with Apple's almost total non-documentation of anything. Mostly just scraped comments with arguments barely described, e.g. 'a string'. No notes, no architecture, nothing.


Yes, my comment contained more hyperbole than it should have; I was frustrated at the idea of "hiding information about a problem" as a good design goal, and let that frustration influence my writing. Apologies to jstummbillig for that. Thank you for being the voice of reason in the room, Joe.

Interesting, I've never found Microsoft documentation for Windows thorough enough to reason about a problem, especially when compared to Arch or Gentoo's docs. I always feel like I'm reading the abstract of a paper, with no access to the rest of it. Is there some kind of special access or login requirement for deeper documentation, or am I just spoiled by the great Linux docs?

Apple products are the king of the "you don't own your device" mantra. I just refuse to develop anything for them, because the time expense of trying to guess how to get the OS to comply hasn't been worth the reward. That being said, I've found the documentation for Swift to be extremely good, better than most other languages I work with.


Yeah Microsoft is definitely not much better than a walled garden for much of their technology.

I was privileged(?) to have access to source under license for years, contracting with a company that ported Windows to various industrial platforms. So my view is similarly colored.

Linux is definitely the king, no argument there.


>For me, apt updates have always been super reliable - and error messages have mostly been quite specific and easy to google.

Lmao, come on now.

"Error: unmet dependencies"


"Error: unmet dependencies"

I swear, I haven't run into this.


Oh là, I was about to ask where the lackey riding in on their white horse to defend the multinational corporation was; welcome to the conversation.


Did a fresh install of windows 11 yesterday and winget doesn’t work. Seriously. I haven’t even installed any software yet.


You have to wait for all the back-end updates to finish. Install all the Windows Updates, then open Microsoft Store and go to the Library and wait for it to update even more shit, then you finally will have Terminal and WinGet.

Yeah, I found that one out the annoying way too.


Maybe I was experiencing something else on top of that because I was getting exception codes at first. But then once I started getting "source errors" I decided to check the store and such and that's when things started working more normally.

What's surprising though is that it listed the msstore source as operating normally when I did winget source update. Regardless, it's kind of crappy that this doesn't work out of the box and that I have to do things not mentioned in any instructions to get this dumb thing working.


My experience (and path that lead to this) was realizing that a fresh install of Windows 11 gave me PowerShell and not Terminal in the Win+X menu.

WinGet would... launch, but provide no output. And of course, it was my first time trying to use WinGet in general...

A few reboots later suddenly Win+X had Terminal and WinGet started working, and on a subsequent reinstall in a VM I spotted that, yeah, apparently all the extra WinGet, Terminal, etc stuff all has to be pushed from the MS Store instead of Windows Update.

Total pain in the ass.


The output turns blue and nothing happens? Update Terminal and then winget will work.


No I was getting "0x8a15000f : Data required by the source is missing" out of the box. Then after that went away it was another error. It might be related to this not being a "local only" account and not a microsoft account.

Ultimately, thinking winget can be used like apt to get a new system up and running is misguided on my part.


You mean 90's Linux?


Are you claiming that using Linux in 2023 will not pretty much require "command line fiddling" to some unavoidable degree (unless you have somebody else do the setup for you and day to day just use what's there)? Because, if so, that would very much not be my experience. Broadly speaking, what advancements am I likely missing out on?


Depends on distro and usage, but once installed Linux does not need command line fiddling. There are GUIs for installing and updating the OS and applications and pretty much everything else an average user does.


In that case I will have to disagree. I run a stock, up to date Ubuntu. Yes, there is a GUI for a lot of things, but I will most certainly run into something that will require me to drop to the CLI before long – which I am fine with, but I am not going to pretend it won't inevitably happen.

My experience is: On Windows, for most enduser use cases (and this absolutely excludes anything dev related), I would expect the user to get by without touching the CLI. In fact, I would assume that any given application will expose all of its functionality only through a graphical UI. For me, on Linux, the opposite expectation applies.


I disagree. On stock Ubuntu, there is no normal situation you'll run into where you must use a terminal. If you decide to do something that requires a terminal, that's a different story. But basically every normal operation you'd do on windows GUI (and more...), you have a GUI option available on Ubuntu; it is the Windows of the Linux world, and it behaves as such.


C'mon, my stock Ubuntu can't even remember my sound input options, which are reset on each reboot. There's no option for that in the GUI. Also, upgrading Linux can be a big pain for non-experts. I've been dual-booting Windows/Linux for a long time, and I simply don't upgrade Ubuntu: I'll do a fresh install if I want/need the new version.


Also, nothing in Linux will ever just give some cryptic hex code error or crash report.

It's either a crash dump or a nice error message. If not, I just increase the log level to find the actual problem.


Yeah, in OpenSUSE I'd just rollback a bad update like this. Automatic snapshotting before and after all package manager operations is very nice.


Hey! Some of us still use Linux the old "hacker command line" way


I use arch btw, and even then with zsh, shell highlighting, annotations, fuzzy finder and vim on steroids, the "hacker way" can be really different from two decades ago.


It doesn't mean that you have to.


It's hard to avoid. I know I can open the GUI file manager to look at a directory, and sometimes I try to force myself to do that, but way too often I quickly open a terminal and type a "ls" command. It's just what I got used to when I first started using computers back in the 1990s, and habits can be hard to break.


Yeah, this isn't Linux. Linux is better than this these days.


I use Linux. You absolutely need to drop down to the command line at times. Upgrading my kernel or graphics driver breaks something like 20% of the time for me.


Strange.

I've been on Linux some 28 years now, and it used to be like that¹. But today? I haven't had a kernel or graphics driver update fail on me for years. Maybe 10+ years? Granted, my Linux is the most boring, hardly-configured Ubuntu LTS. Boring hardware. Boring drivers. Everything optimized to "Just Work" - my machine pays my bills, and if I'm down for a day because I fiddled around with some fancy custom trackpad module, or special Bluetooth whatevers, it's costing me actual money.

I'm certainly in my terminal 80% of the time (developing in nvim etc), so can't say much about how much it is needed by someone who doesn't know or use that terminal.

¹ My personal worst was at a conference, doing a presentation for a 200+ audience, having to recompile X while entertaining the audience, because the projector wasn't found and X crashed on it.


What hardware? Your experience with Linux can be very different depending on that. For example, my laptop's WiFi refuses to work properly on Linux, so I'm forced to use Windows.

Edit: oh, and not to mention graphics (Nvidia), multiple monitors and HiDPI, especially with fractional scaling and different multiples per display. HiDPI on Linux doesn't work nearly as well as Windows and macOS.


It's funny, I can say the exact same thing in reverse. My two latest laptops are boring: integrated graphics, intel wifi, run-of-the-mill hp "enterprise" fare. One running a zen3, the other an 11th gen intel.

On the AMD, I couldn't use the webcam for a good six months under Windows. It wouldn't detect some part of the USB tree.

On the Intel, for a good year, I couldn't get 4K@60Hz over DisplayPort via the HP dock. Then at some point installing Intel's drivers fixed the issue, but Windows would insist on "updating" back to the older, broken version every so often. Now Windows also has the correct drivers. A different dock still doesn't work. Then there's the fact that the Windows installation (11 22H2) doesn't support the touchpad or the wifi card. Bonus points for the default install insisting on connecting to the internet for the online account thing.

Linux had 0 issues on both since the day I got them.

As for HiDPI, yeah, the Linux story, at least with X11, is non-existent if you want multiple HiDPI settings. But Windows is a crap shoot, too. It's easy to get a blurry mess: just connect and disconnect an external screen, and even the freaking Windows 11 start menu is borked. It looks fine when you open it, but start typing something and enjoy the blurriness. Apps will be stuck with either enormous or tiny text. The context menus of the systray icons will appear all over the place.

Some apps manage to combine everything: tiny text, yet blurry, and displayed in the middle of the screen. To name and shame: Fortinet VPN client.


My current machine is a "Clevo", some white-label thing that I bought from a company that ships laptops with Linux preïnstalled. Intel i7, Intel Iris graphics, etc. It all "just works".

I'm not trying to argue that "Linux Just Works" or that "you won't run into hardware issues on Linux". I'm arguing that by choosing "boring" hardware and "a boring LTS from a large distro", you won't.


If you know you're going to use Linux before buying a machine, you can generally save yourself all the trouble by simply avoiding broadcom and nvidia. Those two have been the source of the most trouble for years.

Intel for graphics and wifi is generally the safest choice. I pick laptops with those and have great success.


A decade ago I stopped having problems with both BCM and NVidia devices in Linux. Last year I purchased a very recent wifi dongle and had to manually install broadcom drivers, but other than that it's been smooth sailing with no issues.


I bet you use stable graphics drivers on older graphics cards, and the person you're responding to uses newer graphics cards and quickly updates to the latest drivers.


Yes. Hence "boring".


That's way higher than my experience.

I had a kernel update break a memory-management API that JavaScript engines use about a year ago, and before that my last kernel-induced update breakage was in like 2012 with fglrx.

That broken kernel update also would not have made it into more conservative distros, as the change was rolled back 5 days later.

Meanwhile, I ran into the Windows issue in this topic back in like 2019. The Windows 7 installer created even smaller reserved partitions than the Windows 10 one (100 vs 500 MB, IIRC), so users with systems upgraded from 7 to 10 would have seen this sooner.

And for completeness' sake, I've also seen OS X fail to update to High Sierra because that updater didn't like something about the way my employer's provisioning software had set up the partition layout some years earlier.


nVidia on Linux is particularly nasty in that aspect.

I've administered a fleet of ~100 Ubuntu devices that used nVidia for some AI stuff - unattended-upgrades disabled and all that - and yet the graphics drivers broke at regular intervals, every couple of months. Apparently, nVidia drivers have some mechanism in place that can update them on their own. The only fix was to uninstall all nVidia drivers, upgrade all packages, then reinstall the nVidia drivers.
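
On Ubuntu that dance looked roughly like this (from memory, so treat it as a sketch; the driver package name is release-dependent, and nvidia-driver-535 below is only a placeholder):

    # purge every installed nVidia package
    sudo apt purge '^nvidia-.*' '^libnvidia-.*'

    # bring the rest of the system up to date
    sudo apt update && sudo apt full-upgrade

    # reinstall the driver, or let ubuntu-drivers pick one
    sudo apt install nvidia-driver-535   # or: sudo ubuntu-drivers autoinstall
    sudo reboot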


I assume that you’re an Nvidia user. This is most likely the cause of your problem. Linux just works on Intel or AMD.


nVidia, I would guess?

I realise it doesn't help now, but in the future you may want to avoid hardware that's specifically hostile to Linux.

There's a reason I chose my latest laptop specifically because it's Ryzen-based.


I think the point is that the Linux command line is more functional and easier to navigate than the Windows command prompt


I haven't had anything break in the graphics stack on my computers since 2007.

But somehow Debian keeps uninstalling the display manager. One would think this is the easiest problem in the world to avoid, but while they manage to avoid every hard problem, this one keeps slipping through.


What distro do you use?


Of course. Things are not perfect, but most of the popular (GNU/)Linux distributions have come a long way from the old days of requiring terminal fixes.

I just meant it as a lighthearted joke.


On the current LTS version of Ubuntu, I had to drop down to the terminal to adjust the mousewheel's scroll speed.
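
The hack in question, for reference; it assumes an X11 session and imwheel, and the trailing 3 is how many scroll events each wheel click gets turned into (tune to taste):

    # ~/.imwheelrc -- multiply each wheel click into 3 scroll events
    ".*"
    None, Up,   Button4, 3
    None, Down, Button5, 3

    # then run imwheel, grabbing only the wheel buttons
    imwheel -b "4 5"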


Having to use such a terrible hack to change scroll speed on GNOME is my least favorite thing about it.


Ubuntu is doing wrong a lot of things, not just this one.


Except if you want to use dual monitors, etc. etc.

Every time I've tried Linux, I've had to dive into the terminal and paste commands from the internet to get normal things to work.


Only two monitors?

I've no problem with more than that. I don't know what hardware you're using, but that's atypical.


Two monitors with different DPI. Couldn't get them to work well without running a weird xrandr script on every boot with some "1.9999999999...." setting.
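
For context, the general shape of that kind of script (illustrative only - the output names are made up, check `xrandr` for yours, and the not-quite-2 scale factor is the usual workaround for glitches that an exact --scale 2x2 can trigger):

    # hypothetical outputs: eDP-1 is the HiDPI panel, HDMI-1 the external screen
    xrandr --output eDP-1 --auto \
           --output HDMI-1 --auto --scale 1.9999x1.9999 --right-of eDP-1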


Without Wayland I'm pretty sure this is still the case.


I use Wayland. Perhaps that's your issue?


I moved to Wayland mostly, but gaming performance is worse. Mostly latency and 1% fps lows.


"Works for me" (r)(tm)


Was this in the past decade or before?


A couple of years ago.


I've never had an update issue on a Linux device. I update whenever I like and have never been forced to. Occasionally some items fail due to dependency issues, but the error messages are very descriptive; I handle the dependencies and move on with the update. Failed updates are a uniquely Windows issue in my experience - I've had something like 10-15 of them in my lifetime.


Your experience differs from mine. If I only count major incidents, I've had at least two occasions where a Linux operating system update completely hosed my system. One was a Rackspace VPS I never managed to recover; I had to use their management tools to download a binary image of the unbootable disk and fuss around mounting it to get my files off so I could build a new server.

I'm very hesitant to touch Linux updates, but I am occasionally "forced to" when I need to run a new piece of software, and that ends up being like pulling a loose thread on a sweater.


That's quite strange. What distro? For a while I was managing several hundred Linux devices remotely, so I've done thousands of updates with no issues. Linux has been my daily driver for 15 years, and I've never had an update issue there (that I didn't create myself). Specifically, the thing I've NEVER had an issue with is a standard system update: apt upgrade, pacman or yay -Syu, things of that nature.

I've had plenty of problems when trying to do weird shit, but I've also learned how to solve those problems, and I've never borked an important system so badly that it became unrecoverable. Some distros are prone to update failure because they always give you bleeding-edge releases. I'd suggest running Debian, a ridiculously stable "fire and forget" system. For rolling distros like Arch, I'd say update every couple of days to avoid dependency chain problems. But again, none of this is difficult beyond having to read some documentation, and none of it should even be able to cause system loss.

Did you do something else weird that you forgot to mention?
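
For reference, the kind of "standard system update" I mean is roughly nothing more exotic than:

    # Debian/Ubuntu
    sudo apt update && sudo apt full-upgrade

    # Arch (pacman, or yay if you also pull from the AUR)
    sudo pacman -Syu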


In answer to the question, no, I did not forget to mention that I did anything "weird". It was apt-get upgrade.

I'm not going to have an argument about my personal experience. You can find plenty of others in the comments who have had similar problems. I enjoyed the response, as I've been joking for 20+ years that 95% of the time Linux advocates will tell anyone running into issues "you're using the wrong distro".


Don't forget to run dism /online /cleanup-image /restorehealth ! /s



