It amazes me that Microsoft haven't replaced the Registry with a simple directory structure, not that it would help for this particular bug, but it would surely be an improvement. I maintain a library for accessing the registry from Linux (https://github.com/libguestfs/hivex) and after writing it I also wrote this screed about how it sucks in just about every way possible:
Actually you can use the Windows Projected File System to project the registry into the file system, making registry keys and values appear as files and directories.
Something is still translating the virtual files back and forth to the half-arsed hive format. Would recommend reading the link I posted since I have actually reverse-engineered the hive format.
Certainly there is a lot of legacy with the registry, but how would any of these issues be improved by moving to a file based config? All these issues could still exist under that model, and there would be new issues too.
Like for example, you already point out how the type system in the registry is very limited. But isn't the filesystem even worse? Everything there is binary blobs with no types at all. So how does that improve things?
It seems like your complaints don't really have much to do with the "directory" structure of the registry, so I don't think moving to a file-based model would really change anything. You'd just end up with the same legacy issues, spread across more files.
Finally, AppData wasn't introduced with Vista, but rather it's always been there if applications need to store file-based data rather than individual configuration values. That is not a new or improved way of doing things as you seem to imply in the post.
The problem with the hive format is that the type has to be set correctly, yet several types have no established meaning. For example, it's totally random whether a number will be stored in binary with type DWORD or stored as a string (with who knows what type and encoding). But store it a different way when writing to the registry, and Windows or whatever app reads that field will break. In a way that's worse than if the type weren't present at all.
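To make the ambiguity concrete, here's a minimal Python sketch (the value and byte layouts are made up for illustration, loosely mirroring REG_DWORD and REG_SZ encodings) showing how the same number can sit in the registry two different ways, so a reader that guesses the wrong type silently gets garbage rather than an error:

```python
import struct

# The same number, 5000, as two different registry encodings.
as_dword = struct.pack("<I", 5000)       # REG_DWORD: 4-byte little-endian
as_string = "5000".encode("utf-16-le")   # REG_SZ: UTF-16LE text

# Read with the right assumption, both round-trip fine.
assert struct.unpack("<I", as_dword)[0] == 5000
assert as_string.decode("utf-16-le") == "5000"

# Read with the wrong assumption, you get nonsense, not an error:
# the first 4 bytes of the string ("50" in UTF-16LE) misread as a DWORD.
assert struct.unpack("<I", as_string[:4])[0] != 5000
```

Nothing in the stored bytes tells you which interpretation the owning application expects; that knowledge lives only in the app's code.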
NTFS specifically has file forks ("Alternate Data Streams") and I guess you could use those to store a type, although whether using forks would be a good idea or not is up for debate.
This seems like the same kind of argument as saying that because JSON doesn't support the number formats I like, it would be better to use a notation that doesn't support types at all. Well, I think the reality has shown that in fact having some support for types sometimes is the more pragmatic way of doing things.
If those types aren't enough for your use case, then you will be forced to roll your own types in some binary/string data anyway, so it seems like it's strictly more work if you just always force everyone to roll their own.
And even then you still end up with the possibility of people using the wrong syntax for your hand-rolled types, like not quoting values that are supposed to be strings or quoting values that are supposed to be numbers.
Besides, wouldn't it be easier to fix this just by adding some more types, or deprecating everything except REG_SZ or something? What's the advantage of moving to a directory based model?
I don't see how this problem would be solved by files. For every setting in the registry, there's a piece of software (and ultimately, a group of people behind it) that claims ownership of it and determines the correct type for it.
File-based configs on Linux have the same problem anyway. The semantics of any config file you find are defined by the application that's consuming it. Any two config files that superficially seem to be using the same format, may in fact use a completely different one - and you'll never know, up until you edit one and it blows up in your face, because it cannot parse empty lines or #-comments, or escape characters, or negative values, or values larger than 2^16, or...
Whether altering Windows registry or a Linux config file, you cannot make a correct modification without knowing what the owners of the modified settings expect.
> It amazes me that Microsoft haven't replaced the Registry
how does this amaze anyone? how do people think backwards compatibility works?
Microsoft, supporting Windows, promises to make every effort to maintain backwards compatibility wherever possible so that programs compiled for, say, Windows 95 will run unmodified on Windows 10.
Not every program from 20+ years ago runs, but a lot do! That's a very hard thing to do if you wish to continue to advance the technology you use in your operating system. Apple doesn't even try.
Microsoft have taken steps to break backwards compatibility a few times in the name of progress and every time I talk to people during those transition periods, it is a 50/50 split between people who don't know that they've been given a decade of notice and now their "tried & true" software development paradigm doesn't work anymore, and people who are angry because most of the old ways are still supported.
The registry wasn't even supposed to be what it is today. It was a small stop-gap to stand in while a better solution was developed. Developers discovered it, started using it, and now Microsoft has to support it. Of course it's rubbish; metaphorically, it's a piece of a whiteboard used as a doorstop until the real doorstop is delivered, except for some reason people started using it for important stuff and now everyone needs it.
>how does this amaze anyone? how do people think backwards compatibility works?
That doesn't have anything to do with backwards compatibility. Nothing forces MS to stick to the old ad-hoc memory-dump format. Nor is there anything to suggest the registry is deprecated; new Windows components keep using it and adding piles of junk into it.
not everyone uses the APIs to interact with the registry. some manipulate the file itself. it's not supposed to be possible, but it is, and people do it. you have to keep the registry itself to keep backwards compatibility.
if everyone used the API, then yes you're correct. in my opinion, Microsoft should do what you stated, but they don't want to, out of fear of breaking backwards compatibility for people who do the wrong thing. their needs are just as valid as yours or mine.
Most of the actual technical issues you list have more to do with it being extended for the last 30 years in a backwards compatible way than anything to do with it being a hierarchical db instead of a filesystem.
I still see it as a file system, very similar to NTFS (similar in the sense of having similar features). Apart from the (recent) project just mentioned (ProjFS), there also existed a filesystem-like driver for it, just for the record.
It probably seems similar because file systems are typically classified as a type of hierarchical db themselves. That said, "I can represent it with a file in a filesystem" is different from "it is a filesystem". In POSIX, (nearly) everything is accessible through the filesystem, even network sockets; that doesn't mean everything's canonical representation is a filesystem, just that it's mappable.
Regardless, the point wasn't "a filesystem couldn't represent a rewritten registry". It was that the registry is actually a database today (whether viewed as a filesystem-like db or the hierarchical db it's listed as), and that the rest of the technical problems come from it being 30 years old and never rewritten, not from it lacking a filesystem representation as its primary view in the first place.
>This misses the point: the Registry is a filesystem. Sure it’s stored in a file, but so is ext3 if you choose to store it in a loopback mount. The Registry binary format has all the aspects of a filesystem: things corresponding to directories, inodes, extended attributes etc.
> The major difference is that this Registry filesystem format is half-arsed. The format is badly constructed, fragile, endian-specific, underspecified and slow.
Anyway, file systems and databases are essentially similar, the point revolves more around the poor implementation of the Registry (whatever it is).
I think everyone is in agreement it's bad, as I said:
> Most of the actual technical issues you list have more to do with it being extended for the last 30 years in a backwards compatible way than anything to do with it being a hierarchical db instead of a filesystem.
My first line about it being a database was about point 7 in the same link:
> Back to point 1, the Registry is a half-assed, poor quality implementation of a filesystem. Importantly, it’s not a database. It should be a database!
Technically a file system is just a special database. I think a better formulation of the author's point would be "the registry is a lot like a file system, even though a more traditional database approach or fully embracing it as a file system would have probably worked out better".
Also, they would have been able to at least improve the on-disk format with a major version; I highly doubt that the registry itself is backwards-compatible anyway and there are probably very few programs that access it directly.
That's a really good take on what the author was going for, I appreciate the take! I still disagree that it starting out as a filesystem or database has anything to do with why it's so crap 30 years later but it gets to the crux of the topic much quicker.
With how tightly the APIs for accessing the registry are coupled with the model and encodings of the registry, particularly the driver APIs for it, I don't think it would have been so easy to just swap out the back end without breaking something though (which Windows avoids like the plague) but maybe doable by someone more optimistic than me :). The real "rewrite" was the push for Universal Windows apps using the .NET platform which stores everything for the app in XML files and shadow directories instead of the registry. Of course that didn't take over quite like they hoped so they ended up back with using the registry they were trying to leave 10 years later.
Yes but a filesystem is also a hierarchical database.
A filesystem solves these issues specifically because it avoids reimplementation. As the registry has been extended, as you say, it approaches parity with filesystem functionality, but on a parallel track.
At a high level, avoiding multiple implementations of similar metaphors is ideal in terms of security. Reuse what you have.
I'd agree a filesystem is also a type of hierarchical database but the author doesn't think so:
"Back to point 1, the Registry is a half-assed, poor quality implementation of a filesystem. Importantly, it’s not a database. It should be a database!"
These are the kinds of categorizations that people can go nuts over. Rather than get too hung up on words I'd say that whatever this is, it can effectively be represented by a filesystem and therefore it should be as a matter of general architecture and security principle.
I'm actually with the author that if it were going to be rewritten a freshly written columnar database would be way more efficient than representing it as a filesystem but that either would be better than what we have after 30 years. I just don't think "it wasn't a filesystem originally" has much to do with why it's so crap now. Similar case: posix specifies network sockets be accessed as files/filesystems (as most everything in posix is) but nobody actually used that representation because it's inefficient even though it's the standard and easily mappable to files/filesystems. Well I think Solaris actually allows both but the point stands.
Sorry, I'm unfamiliar with what you mean by "network sockets be accessed as files." Do you mean unix domain sockets? These are in fact commonly used and they're certainly no less efficient (more efficient in many ways, in fact).
UDS are interfaced with via the same berkeley sockets api, not via the filesystem api. Have you ever written applications that use them?
I don't mean unix domain sockets; those are known as IPC sockets. The Berkeley sockets API you're familiar with is actually exactly what I was talking about. It does offer both types of sockets (the other being network sockets, as I originally mentioned), but it uses handles in an abstract namespace, not files in a filesystem (e.g. in Linux it's still an FD, but it doesn't map to an actual file on a filesystem; it's just a unique handle in its own namespace).
What I was referring to were things like /dev/tcp/ and /dev/udp you'll find on Solaris (or emulated via bash on most systems), which are actual filesystem paths instead of handles in an abstract namespace. A usage example, comparable to binding a socket to udp://localhost:2048 with the BSD API, would be 'echo "example" > /dev/udp/localhost/2048'. The actual I/O goes through the standard file/filesystem interface, just like /dev/random. It's not the best fit for network sockets though, so they tend to get a raw handle in every modern OS, even if that means rebuilding the wheel on some other things.
Network sockets are the canonical example of "not everything in Unix is a file". "Everything is an FD" is true, but "everything is a handle" is true of any OS design; the unique property that things like RAM and disks are just files under / did not hold for networking.
And yes I have written plenty of apps with ipc sockets and network sockets and raw sockets and even underlying device access (for things like custom Ethernet packets). I'm in networking by profession.
I think there's some confusion about how the sockets api works, let me see if I can clear this up.
Posix does not specify that network sockets should be accessed by file paths. It's possible to do so, but unspecified by the standard.
Sockets produced by socket(2) are regular old file descriptors, just as created by open(2) on a file path, or any other descriptor generating syscall like pipe(2) or epoll_create(2). There is no separate representation among any of these -- they are all just file descriptors. There are many, many ways to create descriptors and many aren't associated with a filesystem. There's no efficiency issue here, nor is there a divergence from a consistent pattern.
If you like, you can use fchmod(2) on a descriptor generated by socket(2) and change its permissions. You can track it by its inode. It doesn't matter that the descriptor is not linked to a filesystem, any more than for a similar descriptor created by pipe(2). They all have the same functionality and fit within the same consistent metaphor. When you run grep | grep, the pipe descriptor has permissions, mtime, ctime, atime and the rest. Everything just works.
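For what it's worth, the uniformity being described is easy to demonstrate from Python (a minimal sketch, assuming a Unix-like OS): a socketpair produces ordinary file descriptors that plain read()/write() and fstat() work on, even though nothing is linked into any filesystem.

```python
import os
import socket
import stat

# A socketpair yields two ordinary file descriptors, the same kind
# produced by open(2) or pipe(2).
a, b = socket.socketpair()

# Plain write()/read() syscalls work on them; no socket-specific API needed.
os.write(a.fileno(), b"hello")
assert os.read(b.fileno(), 5) == b"hello"

# fstat() works too: the descriptor has a mode, inode number, timestamps,
# and so on, even though it is not linked into any filesystem.
st = os.fstat(a.fileno())
assert stat.S_ISSOCK(st.st_mode)

a.close()
b.close()
```

The descriptor is the point of commonality; whether it happens to have a filesystem path behind it is incidental.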
It's trivial to write a filesystem to expose descriptors, in fact /proc does this already for all descriptor tables across all processes. There's no rebuilding of any wheel - the point of commonality is the "struct file" in linux/fs.h.
There's no such thing as a "raw handle" here, btw. That phrase has no meaning.
Thanks for giving detailed points instead of just asking if I've used sockets before - I think I see the main divergence as a result. First though, I noticed a big mistake on my part: I referenced the wrong UNIX standard originally. This is a huge error; I meant to reference TLI (later standardized as XTI, the "competitor" to BSD sockets at the time), not POSIX, as what defined /dev/udp, /dev/tcp, and the APIs to access them instead of BSD/POSIX-style socket APIs, e.g. 't_open("/dev/udp", O_RDWR);'. My apologies, I'm sure I was misdirecting a lot of the conversation and causing confusion with that error.
For where the main divergence in what we are each talking about though when I originally said:
> Similar case: posix specifies network sockets be accessed as files/filesystems (as most everything in posix is) but nobody actually used that representation because it's inefficient even though it's the standard and easily mappable to files/filesystems.
I was talking about literally exposing networking through the filesystem by mapping the construct to files and paths, as that's what the author's registry tool actually does and what the author was proposing Windows should do in a rewrite - not whether sockets can be backed by the preexisting FD handle in an arbitrary namespace using custom socket functions to manage the socket efficiently. As a result, I was trying to explain why BSD-type sockets don't use a literal filesystem mapping even if they have an FD, and you were trying to explain how they're still backed by an FD even if it's not in the filesystem (i.e. describing the same API from opposite ends). I agree fully with your take that it's a standard FD with no performance concerns once created, but looking back I think I tried quite hard to point out I was talking about the literal files/filesystem mapping, not the FD handle, so I'm not sure where the split came from. Perhaps normally "everything is a file" vs "everything is an FD" isn't a big distinction, but this case happened to be about a literal filesystem mapping, not whether it would end up using an FD.
Also, to note: when I say "inefficient" I don't mean "slow to compute" (after all it ends up an FD once opened, as noted heavily at this point); it's the interface that becomes inefficient (which is what the registry article was dealing with). You call it trivial, as in "trivial to expose a mapping", and there's no argument there - but BSD sockets offered a much more straightforward and simple fit around internet-protocol network socket concepts than the filesystem approach of the author/TLI, which is a big reason BSD sockets won out. The "rebuilding of the wheel" is that the BSD socket API defines functions fit to purpose, instead of molding them around traditional file API naming and structure like TLI/XTI did.
In the sentence "It's not the best for network sockets though so they tend to get a raw handle", 'raw' was meant as an adjective pointing out that it's just a non-filesystem-mapped reference (how so depends on the OS: in *nix still an FD, in others not - it doesn't really matter, it's just a ref), per the prior sentence. It wasn't meant as a proper noun describing some distinct handle type you'd find in the source code.
I appreciate the time, at the very least I'm sure I'll never make the error of conflating TLI with POSIX again and at the most I may have solidified some internals I don't get to think about every day!
No thanks: the registry is a truly huge simple key/value store, which is something files-in-dirs are terrible for because almost every single one of them would take up a full block on disk instead of the fraction of a block they actually need.
A better solution would be a simple database (like sqlite3) but then the immediate counter-argument is "okay, so we're done: it's already a simple database", because the registry hive is literally a file-backed database in the same vein as sqlite =)
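As a sketch of that "simple database" idea, here's what a registry-shaped store looks like on top of sqlite3 (the schema and integer type codes are made up for illustration, loosely echoing the REG_* constants; this is not how the hive actually stores data):

```python
import sqlite3

# A minimal hierarchical key/value store: path + value name -> typed blob.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE reg (
    path TEXT NOT NULL,     -- e.g. 'HKLM\\Software\\Foo'
    name TEXT NOT NULL,     -- value name within that key
    type INTEGER NOT NULL,  -- e.g. REG_SZ = 1, REG_DWORD = 4
    data BLOB NOT NULL,
    PRIMARY KEY (path, name))""")

db.execute("INSERT INTO reg VALUES (?, ?, ?, ?)",
           (r"HKLM\Software\Foo", "Timeout", 4, (16).to_bytes(4, "little")))

row = db.execute("SELECT type, data FROM reg WHERE path = ? AND name = ?",
                 (r"HKLM\Software\Foo", "Timeout")).fetchone()
assert row[0] == 4
assert int.from_bytes(row[1], "little") == 16
```

You get transactions, indexing, and a well-specified on-disk format essentially for free, which is the crux of the "it should be a database" argument.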
The Windows registry is not a "simple key/value store" by a long shot. It is hierarchical, there are many different types of value, and there's a complex system of security attributes. These are simple facts.
You're right that a file per value would take a whole block on disk given the way some filesystems are currently implemented, but that's not an immutable feature of all filesystems - some Unix filesystems store small files in the inode. A real database is possible, but also the registry must be available very early in Windows boot (actually it's used by the bootloader, but also by the critical device database) so you'd want something that's at least easy to read with a smallish amount of code.
I feel this mattered much more in the mid 90's when 4GB disks were common in PCs, but with today's modern storage sizes this is trivial. Besides, the NTFS MFT already stores small files directly in the indexes.
I imagine that the registry is optimized for many small values (eg a DWORD - 4 bytes). Most filesystems wouldn't be very efficient with tons of 4 byte files.
Just naively translating the registry into a NTFS directory structure would require 1kb per value, simply because that's the size of a file record (NTFS already has an optimization to store small files directly in the file record if it fits in next to all the attributes and ACLs).
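Rough back-of-the-envelope numbers make the overhead concrete (the per-value hive overhead below is an assumed figure for illustration, not a measured one; the 1 KiB MFT record size is from the comment above):

```python
# Illustrative arithmetic, not measured figures.
values = 100_000           # registry values, each holding a 4-byte DWORD
ntfs_file_record = 1024    # bytes per NTFS MFT file record (resident data)
hive_overhead = 32         # assumed per-value overhead in a hive (header + name)

as_files = values * ntfs_file_record
as_hive = values * (4 + hive_overhead)

print(f"as files: {as_files // 1024} KiB, as hive cells: {as_hive // 1024} KiB")
```

Even with a generous per-value overhead assumed for the hive, the naive one-file-per-value layout comes out well over an order of magnitude larger.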
Also the Windows Filesystem driver stack is not very efficient for accessing many small files. It's built for flexibility and security, not speed.
A registryfs would be. The data structures underpinning access would not need to change.
The importance of using a filesystem interface is reuse of the access control mechanisms and filesystem API. It would avoid the type of bug above, due to nesting a hierarchical permissioned structure inside a file.
Implementing the registry APIs, but backed by a regular filesystem (as Wine does) would be the sensible thing for Windows to do. (I looked at the source of Wine just now and I'm fairly sure nowhere does it process hive files.)
PowerShell exposes the hives as a directory structure, and has for a decade or more. just type "HKLM:" or whatever hive you want and start using "cd" and "dir" all you want.
The guy who implemented that really did a disservice to the filesystem metaphor, though. Instead of making values analogous to files, they're properties of registry keys, so instead of Get/Set-Content, Get-ChildItem, etc. you need to do some gymnastics with Get/Set-ItemProperty to work with them. For example, if you want to find a registry value with a particular name, you can't just do 'dir -rec SomeValueName' to find it like you can on the filesystem provider.
well, the registry has types. files are raw binary data that is almost entirely untyped. how could they possibly enforce typed data without diverging from the filesystem metaphor?
Cmdlets can have provider-specific parameters, so in this case I would add a registry-provider-specific data type parameter with sensible default behavior to New-Item and Set-Content.
For example...
Set-Content hklm:\software\xyz\abc -value 1
...could create a DWORD value by default based on the type of the value argument, while adding '-DataType String' would enable creating a string value.
Files are raw binary data, but a (BTW unreliable) method of indicating what they contain has been in use for years: file extensions.
I can see no reason why in a filesystem-like representation of the Registry you cannot have a value.dwd (which is a DWORD), a value.bin (which is a BINary), value.esz, etc., or at least that is what I would use.
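A minimal Python sketch of that extension-as-type idea (the extensions and decoders here are the hypothetical scheme from the comment above, not anything Windows actually does):

```python
import struct

# Hypothetical mapping from file extension to value decoder.
DECODERS = {
    ".dwd": lambda raw: struct.unpack("<I", raw)[0],           # DWORD
    ".esz": lambda raw: raw.decode("utf-16-le").rstrip("\0"),  # expandable string
    ".bin": lambda raw: raw,                                   # raw binary
}

def read_value(filename: str, raw: bytes):
    """Decode a value file's contents based on its extension."""
    ext = filename[filename.rfind("."):]
    return DECODERS[ext](raw)

assert read_value("Timeout.dwd", b"\x10\x00\x00\x00") == 16
assert read_value("Path.esz", "C:\\x\0".encode("utf-16-le")) == "C:\\x"
```

It inherits all the usual weaknesses of extensions (nothing stops a mislabeled file), but it would at least make the intended type visible in a directory listing.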
>I'll take all the side-channels I can get though. These "exploits" are really useful for regaining control over my own PC.
Not really? What does this exploit let you do that you couldn't already do with a local administrator account? Or are you making the general argument that "EoP exploits are features because they allow you to jailbreak your device"?
>Just yesterday I learned how to Run-As TrustedInstaller, and that let me remove a lot of unwanted bullshit on my windows 10 install.
They're not really comparable. You need admin to do it, which means you already crossed the security boundary[1]. This is in contrast to this exploit which allows you to cross a security boundary.
> > What does this exploit let you do that you couldn't already do with a local administrator account
>There are some things that users in Administrators group still can't do. Hence the need for TrustedInstaller perms.
By "this exploit" I was referring to the exploit mentioned in the article, not whatever gp did to get trustedinstaller permissions. As far as I know I don't see why you'd need access to the SAM file to give yourself trustedinstaller permissions. You can do that yourself if you're administrator.
Also, from a security point of view there isn't much that administrators can't do. You're right that they can't directly delete certain files, but they can take ownership of any file they want and adjust the ACLs to give themselves the required permissions. I don't think this is some sort of EoP/exploit/hack, but rather protection against accidental deletions (eg. https://news.ycombinator.com/item?id=23054506)
> for regaining control over my own PC.

> Just yesterday I learned how to Run-As TrustedInstaller, and that let me remove a lot of unwanted bullshit on my windows 10 install.
I understand Linux, Mac, FreeBSD, Magic-Pony-OS is not everyone's cup of tea, or they might not be in a position to choose their OS (work etc).
But DAMN, that quote above really shows me how bad it is out there! Sure it can/does happen on other OSes as well, but I'm betting Windows is the leader in "my-pc-is-not-my-pc-anymore" :/
I've been spending the last 48 hours strongly pondering Linux as a daily driver. If it wasn't for my crippling visual studio addiction, I'd probably be able to swap all my PCs over, with the exception of the one bastard stepchild win10 that I will keep in the closet for when BF2042 is released. Virtualization is another option that I am investigating actively now.
I could even see the path for getting our product off the Windows platform and onto Linux (while still using Microsoft's dotnet toolchain). There are only 2 DLLs keeping us locked to Windows and I have a very solid hypothetical answer for both.
All of this is so depressing because it doesn't have to be this way. A few small changes to the OS (that would incur negligible impact to Microsoft's cashflow or margins) could mean life changing improvements in the user experience.
If profit must be obtained, then Microsoft should consider a "hacker" build of windows that starts as a bare-ass powershell prompt that you have to tack on what you want to use. I'd pay a fucking premium. Microsoft, are you out there? Charge me $1000. I swear I'll pay it if you promise to not shove updates, telemetry, defender or cortana down my throat ever again.
> with the exception of the one bastard stepchild win10 that I will keep in the closet for when BF2042 is released
For what it's worth, all recent Battlefield games run flawlessly through Proton, including multiplayer with anti-cheat, D3D12, and soon (if not already), ray tracing. This includes at least BF:BC2, BF3, BF4, BF1, and BFV. There's no reason to think BF2042 will be any different.
I've found VSCode and dotnet 5/core to be amazingly liberating from the slow bloated mess that is Visual Studio and the old .NET Framework. This is the way it should have always been, but I'm happy we finally got here.
No more need for VS really or any other proprietary bloatware.
First thing I do on a new machine is `choco install vscode`, then it syncs my extensions and I am ready to roll. As an extra benefit, `code` is usable ASAP in the CLI and I can pipe anything to it.
Is LTSC still offered? That was a pretty minimal (though still GUI) install last time I tried it. Also I think it only gets security updates, and only when you initiate the update process.
> Microsoft, are you out there? Charge me $1000. I swear I'll pay it if you promise to not shove updates, telemetry, defender or cortana down my throat ever again.
Count me in too. Give me minimal install then shut up and take my money.
Man, I feel your pain! Was there once myself! Company-focused Win-products :/ But the last 10 years I've been lucky, no such requirements! It really is nice :)
My entire team uses VS and I have no problems loading the same projects in vscode and being equally productive (or better). Had to fix a few quirks here and there before the solution would load, but nothing too complicated.
IMO if you expand PC to cover mobile computing, the real tragedy is iPhone. No sideloading, very restrictive app store policies, and no custom OSes at all. At least with a Windows desktop or laptop, you can run Linux or one of the other actually free OSes. Modern MacOS is also pretty unfriendly for developers and power users, but at least Apple is somewhat aligned with users on privacy and security, unlike Microsoft.
> Modern MacOS is also pretty unfriendly for developers and power users.
It has become somewhat unfriendly, but I really appreciate that you can still do whatever you want.
To run self-signed apps, run `sudo spctl --master-disable`
To turn off System Integrity Protection, run `csrutil disable` from recovery mode.
To modify the root filesystem, do all of the above and run `csrutil authenticated-root disable` from recovery mode.
To disable library validation, do all of the above and run `sudo defaults write /Library/Preferences/com.apple.security.libraryvalidation.plist DisableLibraryValidation -bool true`
To disable AMFI, do all of the above and add the boot argument amfi_get_out_of_my_way=0x1
(Some steps may be a bit different on Apple Silicon Macs, I don't own any so I'm not as familiar.)
---
You now have the same privileges Apple does. You can grant yourself whatever entitlements you like, inject your own code into any process, load your own kernel extensions, or just replace the whole kernel with your custom build of XNU.
I actually think a decent chunk of macOS's perceived "unfriendliness" comes from Mac users being less willing to hack around than users of other OSs. The common refrain in Mac circles seems to be that System Integrity Protection should never be switched off under any circumstances. I agree, if you're a normal user—but if you're not, and the handcuffs are annoying you, just unlock them already. (But do leave everything else in place until such a time as it presents a roadblock.)
Also, method swizzling in Objective-C is fun, try it!
There’s a safer way to run self-signed software on macOS, for anyone that prefers not to do the master disable. First, try to run the program. When it fails, open Settings.app and go to the security section. You’ll find the most recently blocked program name mentioned and an Allow button that will remove the block. Then, you can run the program. You need to do this only once per program.
Right click open. Fail. Right click open again, hit okay, and it will succeed. It remembers your decision. This has been the magic incantation since signing was introduced.
I’m so tired of seeing folks parroting no sideloading on iOS. That’s not been true for a long time. Yes, the conditions of side loading (needs a free developer account, must have app signing refreshed weekly, etc) might not be palatable for your taste (which I’d generally agree), but to say it’s not possible to sideload apps on a stock iOS device is just wrong.
IMO, saying "no side-loading" is as good as correct, and getting technical about it just creates confusion and muddies the waters. Unless you're paying $99 per year for a developer account, what little sideloading Apple offers is completely useless for anything but limited testing. Who wants to reinstall an app they actually use every seven days?
The semi-exception is Altstore, which is a fantastic project... but it's a major hack which sometimes breaks, and which Apple is liable to kill at any time. You also need to keep a server running on a PC or Mac on your wifi network, which isn't workable in a lot of situations.
I mean, my iPhone can run unsandboxed sideloaded apps, because it's jailbroken. But I wouldn't say that Apple allows third-party unsandboxed apps.
I don't see how you can say with a straight face that getting technical just creates confusion and muddled waters when side-loading is already something that mainly technical users do. It just seems like a lazy way to dismiss valid criticism. The vast majority of users don't side-load on their phones or have any interest in learning to do so. Side-loading is already technical.
Those are two different definitions of technical. When you accuse someone of "getting technical", it has a very specific meaning: It means that they're overemphasizing the dictionary definition at the cost of more practical considerations. That's exactly what's happening here, by calling what Apple allows on iPhone "sideloading". Yes, technically, you are able to get an app on your phone without going through the app store. But without a paid developer account, the fact that you have to reinstall weekly is intentionally designed to make that impractical for actual use.
Sideloading is a made-up term anyway, with a loose meaning. The most common variant of the definition is:
> install apps that were not approved by the OS vendor and/or delivered via said company’s app store
iOS meets every letter of that.
The weekly resigning limitation is explicitly not about blocking code you wrote for your own device, but about blocking piracy. The same feature that allows you to sideload an app you wrote also allows you to take many paid apps and resign them for your device, skirting payment. I wish Apple would relax the signing for code that could be provably unique, but I'm sure there are ways that would still be exploited, and it would turn into a constant cat-and-mouse battle that Apple is choosing not to engage in. Does that suck? Yes. Does it mean you can't sideload? No.
Long story short, I would not object to anyone who says iOS sideloading is useless without paying (even though some folks would consider that wrong), but claiming it doesn't exist, when in reality it just doesn't meet your (or my) needs, feels important enough to push back on, even at the cost of karma.
Yes, in the same way that tomatoes meet the definition of a fruit. (And before you say "but tomatoes are a fruit"—exactly.)
I respect your desire to be precise, but the problem is that it makes conversations super difficult. Detailing Apple's convoluted policies every time the topic comes up is tiresome and needlessly derails the conversation.
> I respect your desire to be precise, but the problem is that it makes conversations super difficult.
Respectfully, this isn’t about being precise, it’s about being factually correct. I feel your tomato example is off the mark. A more fair (albeit not precise) analogy would be someone saying “there is no sun in the sky” and someone correcting them by saying “yes there is, it’s just behind that cloud” and then the person arguing back that “only suns that aren’t behind clouds count”.
I built myself a custom calculator 5 years ago for my Android phone. I've reinstalled it once in that time frame, and that only after switching phones. Other than that, it's just there when I need it, with 0 maintenance in 5 years.
Were I to switch to an iPhone, I would have to either list my calculator on the app store (and pay the yearly developer fee) or have to remember every single week to "refresh" my app, otherwise it won't work the next time I need it.
Android has side loading. iOS has the bare minimum concession to allow developers to build something at all, and even that has unnecessary friction built in explicitly so that people don't try to use it to sideload.
> > Yes, the conditions of side loading (needs a free developer account, must have app signing refreshed weekly, etc) might not be palatable for your taste
> have to remember every single week to "refresh" my app, otherwise it won't work the next time I need it
Did I not fully state that up front? Just because it doesn’t work for your needs (or mine for that matter) doesn’t change that my point is 100% correct, to say that sideloading isn’t possible on iOS is fundamentally wrong. Fake imaginary points (aka HN votes) be damned, I’m not going to cave to the Android fanboys. Apple/iOS has many faults, so I don’t get why folks need to focus on something that isn’t factually correct.
First, just to clear this up: I am not an Android fanboy, and not an iOS hater. My wife has an iPhone and an iPad and they are great for her use case, and I have a lot of respect for the consistency of the experience on an iPhone. I reluctantly have an Android phone because it's the only thing out there that meets my needs at the moment, but I'm under no illusions as to its flaws.
On to the question of side loading: technically, Apple does provide a way to load code not from the app store. Some might call that side loading. However, when most people say that they want to be able to side load apps on their phone, they expect that their apps will function as first-class citizens. Apple's version of side loading is more like a very temporary work visa than a grant of citizenship, which makes it structurally different than what is being asked for.
> First, just to clear this up: I am not an Android fanboy, and not an iOS hater.
Just to be clear, I wasn't calling you that specifically. That was frustration with the general type who responds whenever I jump in to share what does and doesn't exist in reality.
In your case, I made exactly your point up front and even stated it didn’t meet my needs personally. AltStore does a decentish job of smoothing along the process, but it’s still too much burden for me. But I feel I am very correct in saying it meets the technical definition of side-loading and even the “common knowledge” definition as long as you acknowledge the caveats, which I do.
So what do I do? Well, as much as I’ve spent on phones, laptops, tablets, cell service, internet service, etc I find adding another $99/year on top is just noise. I’m fortunate enough that my career allows me that option, otherwise I don’t know what I’d do as Android has dealbreaker faults for me (I do own a few Android phones, but none would remotely be my daily driver).
> (Nearly?) All ChromeOS devices use CoreBoot. You really can't get much more open than that.
Last time I looked, it was really hard to install anything other than ChromeOS on Chromebook hardware. You can install a chrooted Linux on them, yes, but on the device itself you can't even execute unsigned binaries.
Impossible? No. Harder than executing an installer with elevated rights? Yes. Plus, they also come with pre-installed software like Google Docs.
> Android is open source and if you don't buy a locked down device from a carrier,
That's quite a big if. Android itself is open source, yes, but >90% of the ecosystem relies on Google Play Services, which are anything but. And when it comes to pre-installed apps that the user can't remove without a lot of effort, Android basically invented that.
> the bootloader is unlockable and the system easily rootable.
If you wipe your device and void your warranty. And then install a third-party binary to actually use those rights, while losing the ability to use quite a few apps (like banking). That is, if the manufacturer makes it that easy (Xiaomi, for example, requires you to sign up and wait - it's possible, but anything but frictionless).
> Now, if you were to mention MacOS and iOS... then you definitely would have had a point
I can't talk about macOS, to be honest. Though, as far as I know, getting a root shell is not hard and running your own software is not a problem.
We agree on iOS, but the grandparent talked about PCs - iOS really does not fall into that category (that's why I explicitly mentioned smartphones).
> Your two examples of something more 'not my pc anymore' than Windows aren't exactly good ones.
Windows is not a good example of that. Don't get me wrong, I don't like Windows. But it's by far not the worst example of a locked-down, vendor-owned system, and it would be even less bad if the administration UX were simpler.
I mentioned that exclusion in the first comment. I re-emphasized it in the second comment. If we don't limit ourselves to PCs, I raise you my PS1 - it could not even play a burned CD without hardware modifications, let alone be customized. It predates iOS by 13 years.
> Android is open source and if you don't buy a locked down device from a carrier, the bootloader is unlockable and the system easily rootable.
Yes but it's a subpar experience compared to the closed Android.
I have been using GrapheneOS for about a year, and I can't do much with my phone anymore. I stay on it for the same reason I run Kubuntu on my PC: it's a relief to know it's not under Microsoft's / Google's all-seeing eye.
If there was never an "old way" of doing things that didn't involve the new TrustedInstaller system, then would we even be thinking twice about these new restrictions? Or would we just see the restrictions as part of the design of the APIs?
Just because they took a part of the system that used to be externally facing and made it internally facing, I don't think that is the same as making "your PC not your PC anymore". If they were blocking administrators from executing arbitrary code or having arbitrary access to I/Os, that would be a different story.
> But DAMN that quote above is really showing me how bad it is out there !
Actually, Windows is quite awesome nowadays. I used the mentioned OSes for years during Windows' down periods, and since Satya Nadella took over the leadership I have been very happy with Windows (I primarily spend my time in PowerShell, the browser, VS Code and dev tools, but have separate dedicated installations for games, media, etc.)
Now, with this can't-turn-it-off helicopter attitude, I am seriously considering switching to some Linux variant again. Mac is totally out of the question due to similar concerns.
Yeah, it was the only way to remove Defender. Then I used debloaters and ShutUp10 to remove all the other "features". Windows didn't like it and brought ALL of them back on update. Now I've disabled updates and am totally motivated to go back to Linux.
Luckily all the tools I use on Windows are cross-platform, and with PowerShell, VS Code, SQL Server etc. on Linux and games working, nothing holds me any more. I will probably miss AutoHotkey and foobar2000 (maybe Total Commander, but Double Commander is a decent alternative and much better in some areas).
>...and are totally motivated to go back to linux.
Windows 10 was what broke me and got me to using Linux full time. Before that, I barely knew how to even just get my way through debian to resume a disconnected screen session. Now I prefer to be in Linux. Even if the software that I can run using Wine/PlayOnLinux/Proton/Lutris isn't 100%... it's sufficient to where I don't miss being on Windows at all.
I recently upgraded my main system and used the extra parts to rebuild my Win 10 standby box. I grew up on Windows. Started with Win3.0/DOS and used every iteration except WinME and Vista... and the rebuild only reminded me how much of a pain in the ass Windows is to install. 10 still has a lot of the usability bugs I've encountered from way back in the Win98 days but all the extra crap we have to deal with now (most inconsistent and overencumbered UI ever) just makes it even more of a chore to use than it ever has been.
Totally removing Defender as TrustedInstaller is the only option if you don't want it turning itself back on arbitrarily. I went through this hell yesterday for about 3 hours.
> It was working like that before, but on latest updates it automatically turns on every restart (or so).
That's if you disable it through the normal settings interface. The group policy settings stick, although you might have to turn off "tamper protection" first before applying the group policy.
Because he wants to remove unwanted software from his machine, not disable it. It's not dissimilar to being unable to remove bundled software on Android.
What's the difference, that you save 200MB of disk space?
>It's not dissimilar to being unable to remove bundled software on android.
It actually makes less sense on android since bundled apps are typically installed on the /system partition, which means they don't really take up any disk space (the space allocated to the /system partition is the same regardless of whether the app is there or not).
Feature, I'd say. Volume Shadow Copies are used to make consistent online backups of an NTFS file system. I don't think non-admin users are normally able to make them in the first place, and if admin is required, it's hard to see the fuss.
Definitely a bug. The Unix equivalent would be a package update silently making /etc/shadow world readable, exposing the hashed passwords of local users.
Not a big deal for a single user machine — there’s nothing you can do with this that you can’t do some other way as a local admin/root — but not good if you have untrusted, non-admin user accounts.
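The /etc/shadow analogy can be made concrete with a small permission check. A minimal sketch in Python, using a throwaway temp file as a stand-in for the sensitive file (the helper name and the temp-file setup are illustrative, not part of any real tool):

```python
import os
import stat
import tempfile

def world_readable(path: str) -> bool:
    """Return True if the 'others' read bit is set, i.e. any local user can read the file."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

# Throwaway file standing in for /etc/shadow (or the SAM hive):
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o600)           # correct state: owner-only, like /etc/shadow
assert not world_readable(path)

os.chmod(path, 0o644)           # the bug: an update silently loosens the permissions
assert world_readable(path)

os.unlink(path)
```

A periodic check like this over known-sensitive paths is one cheap way to catch a regression of this kind before an attacker does.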
Huge oversight: it overrides and breaks previous GPO settings, which suggests the tech debt in Windows NT itself is huge.
Which makes you wonder how comfy macOS's POSIX side really is, but they have had their own fun bugs lately too, sudo for example and others. Plus they're doing a CPU architecture jump.
I have the same strong feelings about walled sites and tracking. May I recommend installing an extension to disable paywalls/tracking? Something like https://github.com/iamadamdev/bypass-paywalls-chrome (supports firefox despite the project name) which automatically wipes cookies from sites like Medium which enable "sign on" requirements after so many visits. It really improves the browsing experience.
I used to open these sites incognito or delete the cookies manually but it's really such an annoyance. Better to automate the policy of disallowing these folks to store cookies.
Then why visit such sites? I deliberately do not install extensions that hide problems for me. I want it to be cumbersome. I want to get annoyed so that I get discouraged to use the site in the first place, and I get reminded of that fact with every single visit.
If the few that do care still visit, there is no incentive for the site to stop doing it.
Me neither. The forced login just reminds me that this is a site I don't want to visit, and a tab that I want to close. I'm not going to configure a workaround for every obstacle the site operators throw in my way; if they don't want me to visit, I won't visit.
I think they'll eventually learn not to do these things (see also cookie banners). But if they don't, it doesn't matter to me. There's other content available.
They need to make money. I guess they gave up on advertising directly and instead want to capture our info alongside what we read on the website to build profiles they can sell to marketers.
Possibly I am missing something, but the use of volume shadow copies or direct (RAW) disk access to retrieve particular files that are "in use" is a long time established possibility.
Extents and Rawcopy were initially written several years ago:
The vulnerability here is that regular non-administrator users can also read sensitive registry hives from the shadow copy. This allows for local privilege escalation exploits.
I see, thanks. I never tested the mentioned programs as a non-admin user, though the mechanism (if shadow copies are used) is seemingly the same, so if BUILTIN\Users is authorized, they may work as well (and not only on Windows 10).
I guess one way to phrase it would be "the ACLs on the registry files were always overly permissive, but nobody noticed until now because trying to read them the obvious way failed with 'file in use'"
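For the curious, the hive files in question are in the binary "regf" format (the one reverse-engineered in the linked hivex write-up). A minimal Python sketch of parsing the 4 KiB base block; the field offsets follow the publicly documented layout, and the synthetic header at the bottom is purely illustrative, not a real SAM hive:

```python
import struct

def parse_hive_header(data: bytes) -> dict:
    """Parse the 4 KiB base block of a Windows registry hive file ('regf')."""
    if data[:4] != b"regf":
        raise ValueError("not a registry hive (missing 'regf' signature)")
    seq1, seq2 = struct.unpack_from("<II", data, 0x04)    # primary/secondary sequence numbers
    major, minor = struct.unpack_from("<II", data, 0x14)  # format version
    root_cell, bins_size = struct.unpack_from("<II", data, 0x24)
    name = data[0x30:0x70].decode("utf-16-le").rstrip("\x00")  # embedded file name, UTF-16LE
    return {
        "dirty": seq1 != seq2,                   # mismatch => hive was not cleanly flushed
        "version": (major, minor),
        "root_cell_offset": 0x1000 + root_cell,  # cell offsets are relative to the first hive bin
        "bins_size": bins_size,
        "name": name,
    }

# Synthetic 4 KiB base block for demonstration:
hdr = bytearray(4096)
hdr[0:4] = b"regf"
struct.pack_into("<II", hdr, 0x04, 7, 7)        # clean hive: sequence numbers match
struct.pack_into("<II", hdr, 0x14, 1, 5)        # version 1.5
struct.pack_into("<II", hdr, 0x24, 0x20, 0x2000)
hdr[0x30:0x36] = "SAM".encode("utf-16-le")
print(parse_hive_header(bytes(hdr)))
```

On a vulnerable machine, the same function could be pointed at a hive read out of a shadow copy device path (the well-known `\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopyN\Windows\System32\config\SAM` trick), which is exactly the read the ACLs should have denied to non-admins.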
"To keep reading this story, get the free app or log in." FUCK. YOU. Remember when people just published informative and thoughtful stuff online without expecting monetization? Yeah, I and Pepperidge Farm remember, but it seems to have become a lost art. It's worth it to forgo this article, no matter how interesting it seemed to me, to encourage the author and others to publish their blogs to be readable by all.
Or perhaps, once someone installs untrusted software in the first place, you’re screwed anyway?
This is security 101. AFAIK, you have been able to log in as a local admin this way since forever, and it's never been fixed. I just used it recently to access a deceased relative's computer.
Windows is a multiuser system and tries to give you a reliable security barrier between two (non-admin) users. And at least since Windows Vista it puts some effort into preventing non-elevated software from gaining admin rights, limiting the amount of damage it can do somewhat.
Of course in reality installing any untrusted software on a computer that's not airgapped from everything you care about isn't safe. But that doesn't mean we shouldn't at least try to give better security guarantees.
If all the OS security measures are useless in the face of untrusted software, why were they introduced? Should we just run Windows 98 and FAT32 on our servers since it's apparently basic security knowledge that Windows NT's system of user accounts and permissions doesn't work?
I was surprised to find that a modern windows 10 machine (with all default security options) could have the user password bypassed easily with a Windows setup USB.
I could then read all the user's documents.
I thought the point of disk encryption and secure boot was to prevent that. Yet somehow the hole of allowing Windows setup to give you a privileged command prompt with a decrypted disk was never closed...
You can bypass user login by simply removing the drive and accessing the data on it. This is not a bug or vulnerability; this is completely normal for unencrypted disks.
Default options do not enable any drive encryption
Secure Boot is, as the name says, something to make booting secure; it has absolutely nothing to do with protecting data on disk from someone with physical access to the machine.
A user password doesn't enable encryption. Bitlocker or another Full Disk Encryption solution is what you would want to use. If you can see the data, that means it's not encrypted.
But doesn't Windows 10 ship with device encryption, i.e. full disk encryption? I thought that's exactly what this was, which is what I'm not understanding. How can you see data if the device is encrypted?
Windows home supports device encryption if you meet certain hardware requirements. (A TPM 2.0 chip, apparently) My laptop doesn't meet those requirements so I've never looked into it further.
Windows pro supports encryption with all hardware.
Some Windows configurations have bad permissions on their SAM database.
If a standard user has access to shadow copies (VSS), this can lead to privilege escalation.
Microsoft recommends to [1]:
1) Restrict access to the contents of %windir%\system32\config:
- Command Prompt (Run as administrator): icacls %windir%\system32\config\*.* /inheritance:e
- Windows PowerShell (Run as administrator): icacls $env:windir\system32\config\*.* /inheritance:e
2) Delete Volume Shadow Copy Service (VSS) shadow copies:
- Delete any System Restore points and Shadow volumes that existed prior to restricting access to %windir%\system32\config.
- Create a new System Restore point (if desired).
--
Also, please note that some authorities seem to address this subject carefully. The French national cybersecurity agency (ANSSI) has for instance published a News bulletin [2] but no "real" Security bulletin for this vulnerability [3].
In its News bulletin, the ANSSI specifies that it also affects Windows Vista RTM :).
However, the ANSSI also says that deleting VSS entries (step 2 of Microsoft recommendations) "must be decided after evaluating the advantages and disadvantages with regard to the risks, in particular because there may be other possibilities for privilege escalation depending on the level of security of your information system."
I am confused how having read access to the registry allows local privilege escalation. As a Linux user, having read access to the registry sounds like having read access to /etc, which every user already has. What sensitive data is stored in SAM that allows that?
It seems there are some cases where Windows accepts a password hash for authentication as a user, though. So by having the hash of an administrator, you can escalate privileges.
Agreed. The article also does not seem to explain it. From what I understood the SAM only stores encrypted password hashes, nothing that could be readily exploited for local privilege escalation.
There's a video (bleh) which appears to extract the hash and then use a pass-the-hash. I'm not clear on exactly what the preconditions are (are NTLMv1/v2 hashes still stored by default? Does PTH work with newer hashes? Etc) or if there's another way to escalate
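To illustrate why a readable hash is password-equivalent, here is a deliberately simplified challenge-response model in Python. It is not real NTLM: the real NT hash stored in SAM is MD4 over the UTF-16LE password, and the real response computation differs; HMAC-SHA256 is used here only for portability, and all function names are hypothetical. The point is purely the shape of the protocol — the response is derived from the stored hash, never the password, so anyone who reads the hash can authenticate:

```python
import hashlib
import hmac
import os

def derive_hash(password: str) -> bytes:
    # Stand-in for the NT hash the server stores (real NT hashes are
    # MD4 over the UTF-16LE password; SHA-256 here for portability).
    return hashlib.sha256(password.encode("utf-16-le")).digest()

def client_response(secret_hash: bytes, challenge: bytes) -> bytes:
    # Note: computed from the hash, not the password.
    return hmac.new(secret_hash, challenge, hashlib.sha256).digest()

def server_verify(stored_hash: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(client_response(stored_hash, challenge), response)

# Legitimate login: the user knows the password.
stored = derive_hash("hunter2")
chal = os.urandom(16)
assert server_verify(stored, chal, client_response(derive_hash("hunter2"), chal))

# Pass-the-hash: the attacker only read the hash (e.g. from a leaked SAM)
# and authenticates without ever learning the password.
leaked_hash = stored
assert server_verify(stored, chal, client_response(leaked_hash, chal))
```

This is why "it's only hashes" is little comfort here: in any protocol of this shape, the hash is the credential.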
So one of the most wonderful things about relying on their proprietary closed source operating system is that you can't have external code audits. You just kind of wait for ethical people to come forward and explain bugs they've found and wonder, 1, how long has it been there, 2, how long have bad actors known about this, 3, how many other bugs are just like this or worse that they haven't found yet, 4, do I need to recreate VM images or can I trust the internal patch process to get it installed before I've been exploited, 5, does the patch actually fix the underlying security flaw or is it something they're calling a "feature" now that will always be an issue... I'm so grateful to not be a janitor for Microsoft Windows software anymore.
You're mixing a lot of things for no reason; the problems you describe really have very little or even nothing to do with open source vs. proprietary, or even with OSes.
Points 2/3/4 are exactly the same on other OSes, even open source ones.
Point 1 might be easier to answer yourself (or by someone who is not the vendor) with open source OSes, while for Windows or macOS you depend on the vendor to tell you with certainty "starting with X" (which they always do). But on the other hand, the centralized and streamlined patching model makes it much easier to identify which patch caused it, compared to "which level of package maintainer or upstream caused it: is the flaw in SOFT or in Debian's SOFT-up3, or what?"
Point 5 has nothing to do with open source either, on either you can easily test if it's fixed or not.
Whether it's considered a bug or feature-won't-fix is pretty much always answered, so you don't have to actually ask yourself (but if they do consider it normal, then you can't fix it yourself on closed-source proprietary software, though they usually give you a config change to get what you want).
> You just kind of wait for ethical people to come forward and explain bugs they've found
And the same applies to open source software. It's not as if all the bugs in open source software were found in audits, or that you somehow magically know how long an issue has been exploited by bad actors.
Microsoft Windows is proprietary software yes, but they have something called the Shared Source Initiative.
> Through the Shared Source Initiative Microsoft licenses product source code to qualified customers, enterprises, governments, and partners for debugging and reference purposes.
I say this as someone who doesn’t like Windows and doesn’t run Windows. We still need to admit that Microsoft does indeed let others read the source code, only that they decide who gets to read it and not.
The problem is that it would be dangerous for any FOSS developer to be chosen among those who can see their sources for obvious legal reason. Anyone willing to be exposed to Microsoft's IP and NDAs that way is probably already so tied to them that we couldn't count on any independent security auditing and reporting without Microsoft authorizing it.
The key question is: would they let in people who want to find bugs? Because that is the point here: if you can read the software but are not allowed to do an audit, it doesn't make any difference (for the issue that we're discussing).
Can you clarify the distinction? They share the source code so that other people can do auditing, obviously. But what would be the scenario where you are allowed to read the code, but you're not allowed to look for issues? Have you ever seen that set up anywhere? It would not make any sense.
If you're asking me personally, OpenSSL always had a funny smell even at the time, and so did TLS, simply because it seemed all way too complicated. TLS v1.3 agrees. As far as TLS implementations go I think pretty much all of them have had major, critical flaws. Microsoft's SChannel has had an RCE since it was born, patched the same year as Heartbleed, Apple's Secure Transport had goto fail (also in 2014 if I recall) etc.
Microsoft can easily pay for external software audits. They just need them to sign an NDA or other agreement that the access to code is only to be used to audit the code, and nothing else.
https://rwmj.wordpress.com/2010/02/18/why-the-windows-regist...