
Nowadays it's almost impossible to uninstall an app completely, because most of them create files willy-nilly. And it's the same on all known OSes. The side effect is that system size grows over time.

IMO running an app in a sandbox should be the default option.

On Windows, I used to like sandboxie, which virtualized every write into a single directory. Uninstalling was as easy as removing that dir.

This MS sandbox doesn't allow you to continually run an app in the sandbox, as all data gets destroyed when the app closes, so it's not a sandboxie (or similar) replacement.




> Nowadays it's almost impossible to uninstall an app completely, because most of them create files willy-nilly.

This has always been the case on Windows. In fact, if anything, it's better nowadays than it's ever been, because thanks to the UAC and other controls Microsoft has put in place, developers aren't so free to do whatever they like to the host machine. But I remember a time before the UAC when it was common practice to reinstall the OS on a semi-regular basis (not something I personally engaged in, but a great many of my peers used to).

> And it's the same on all known OSes

It really isn't. On platforms with a proper package manager you can query what files get installed where. A great many package managers even let you query a file on the filesystem and see which package installed it.
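For instance (the path is just an example; the exact command depends on the distro):

  dpkg -S /usr/bin/curl      # Debian/Ubuntu: which package owns this file?
  pacman -Qo /usr/bin/curl   # Arch Linux equivalent
  rpm -qf /usr/bin/curl      # Fedora/CentOS equivalent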

Of course you still have the problem of the software writing files during its operation but that should be limited to $HOME (on POSIX systems) or any path that is writable by the owner / group of the user that application runs as (which should be limited even if it’s a system service).


It really is. I'm not talking about app binaries only, but about all the files an app creates after install. Most of them reside in the home dir, but stay there forever. Like various cache files, settings, ... And most of the time they're not confined to a single dir.


That has always been the case though. For as long as I've used Linux as a desktop, my $HOME directory has been littered with dot-files and folders. And as for Windows, things used to be so much worse. Since the UAC, Windows applications have been limited in where they can write, lest they annoy their users with frequent escalation prompts. Before the UAC, developers often used to write files all over the place - it was a complete nightmare! In fact, one of the primary purposes of the UAC - as I recall - was to rein developers in.

The UAC aside, on Windows you now have the application data directory and permissions on the registry, which both reduce the reliance on random files dumped anywhere. Before then, Windows was like the wild west. And we're not talking that long ago in terms of the history of Windows - Vista was released 11 years ago and it took a few years after that for developers to catch up.

Plus with the trend of moving everything to the web, you're getting fewer native applications which can write those random files in seemingly random locations (that's one of the few good things about the move to web applications in my personal opinion).

You'll always have problems with developers having their own opinions - that's inescapable. But things used to be so much worse.


> Most of them reside in the home dir, but stay there forever. Like various cache files, settings, ... And most of the time they're not confined to a single dir.

I think you need to support that statement. I believe the vast majority of software on common Unix distros creates no files in $HOME[1], and of those that do, the majority use one folder in home[2], which *should* be used for configuration, and often you don't want it automatically removed on software removal.

The few I can think of that write to multiple locations do so because the extra locations are shared folders. For example, I would not want my downloads directory removed on uninstallation of Firefox.

  1: E.g. most things in /bin and /usr/bin.

  2: Other than what I outlined above, I can't think of any that use multiple directories. If it's truly as common as you say, you should be able to provide some examples.


He's referring to the XDG standard [0], I think. It used to be that all persistent user-configuration resided in ~/.${appname}, but some people were unhappy with that so they recreated the etc|var|lib|tmp filesystem usage distinction inside users' home directories. This means that an application's user files are now spread across $XDG_DATA_HOME, $XDG_CONFIG_HOME, $XDG_CACHE_HOME and $XDG_RUNTIME_DIR.

[0] https://specifications.freedesktop.org/basedir-spec/basedir-...
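For reference, the defaults the spec prescribes when those variables are unset:

  XDG_CONFIG_HOME -> ~/.config        (user configuration)
  XDG_DATA_HOME   -> ~/.local/share   (persistent application data)
  XDG_CACHE_HOME  -> ~/.cache         (disposable cache)
  XDG_RUNTIME_DIR -> no default; the system sets it (e.g. /run/user/$UID)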


You're contradicting yourself in your first and second paragraphs...

Those proper package managers still rely on the packager doing things correctly - just as they would when creating a Windows .msi.

There are plenty of Linux packages that create files during operation in their designated /var/log/xxx /var/db/xxx /etc/xxx /home/xxx/ directories that you can't query using the package manager.


> You're contradicting yourself in your first and second paragraphs...

Those two paragraphs are talking about different OSes. The 1st paragraph is about Windows; the 2nd is about non-Windows systems with first-class package managers such as Arch Linux, Debian, CentOS, FreeBSD, etc.

> Those proper package managers still rely on the packager doing things correctly

Sure, but the point is you can query what the package manager has done.

> There are plenty of Linux packages that create files during operation in their designated /var/log/xxx /var/db/xxx /etc/xxx /home/xxx/ directories that you can't query using the package manager.

That's half true. You can query that /var/db/xxx and /var/log/xxx have been created by the package manager, and often the directories (and their contents) will be owned by the user the daemon runs under.

However I do agree with the point regarding your $HOME directory and actually made that point myself:

> Of course you still have the problem of the software writing files during its operation but that should be limited to $HOME (on POSIX systems) or any path that is writable by the owner / group of the user that application runs as (which should be limited even if it’s a system service).

As an aside, you can also query what files a particular application has open. In fact there are a few ways to do this from querying the /proc/$PID directory through to tools like `lsof`.
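For example (the PID and file are placeholders):

  lsof -p 1234           # files held open by process 1234
  ls -l /proc/1234/fd    # the same, straight from procfs
  lsof /var/log/syslog   # the reverse: which processes have this file open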


I have plenty of files in /var/lib/ that are not owned by any package, same in /var/log/, /var/cache/, /etc/sysconfig/ and other directories - their parent directory is owned by a different package than the ones creating these files.

I'm not arguing that a decent package manager is better than none - but they aren't solving all the issues you claim they do.

Pretty much all OSes, including Windows, have ways to view which processes have a file open.


> I have plenty of files in /var/lib/ that are not owned by any package, same in /var/log/, /var/cache/, /etc/sysconfig/ and other directories - their parent directory is owned by a different package than the ones creating these files.

Got any examples of that? You'd expect only docker to write to /var/lib/docker, mysql to write to /var/lib/mysql, etc. I'm not discounting that I've overlooked something, but a quick look in my /var/lib and it's easy to see what is managed by what. So I'm curious what instances you have of a package manager creating a directory and then a completely unrelated daemon writing to it.

> I'm not arguing that a decent package manager is better than none - but they aren't solving all the issues you claim they do.

I'm not claiming they solve all the problems - in fact I literally identified a few problems they don't solve! And even those points aside, there will always be edge cases for things the package manager should have solved but failed to.

Perhaps we should turn this discussion on its head and discuss better ways to solve the problems people are describing? What would your solution be? Or are you ostensibly agreeing with my points but being contrary just for the sake of playing devil's advocate?

> Pretty much all OSes, including Windows, have ways to view which processes have a file open.

Isn't that literally what I just said? (plus I gave a few examples too).


> Got any examples of that?

Not the person you were talking to, but looking at certbot, it puts files into /lib/systemd/system/ and /etc/cron.d/ with root:root.


Thank you. I've not used certbot so excuse the dumb question, but is certbot doing that during install (i.e. via the package manager) or during program execution (i.e. when the certbot ELF is launched)?

I wouldn't expect much in /lib/systemd/system to be installed outside of package managers, but I agree it does happen, and at least it's generally quite easy to identify which service file does what.

crontab is definitely one of those nasty things that can often get forgotten about though (and I speak from unfortunate experience there hah!)

We're really drifting into the domain of Puppet and its ilk now though.


I'm not sure when those files get created, I just knew about that example off the top of my head because I had to spend some time figuring out why our post-renew hook wasn't working.

dpkg -L helps a lot when figuring out where all the files get spread.
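For example, to check whether those certbot paths came from the package (assuming the Debian package; output will vary):

  dpkg -L certbot               # every path the certbot package installed
  dpkg -S /etc/cron.d/certbot   # the reverse: which package owns this path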


> dpkg -L helps a lot when figuring out where all the files get spread.

Well yeah, that was the central point of this conversation :)


Isn't the parallel to UAC a properly configured SELinux? I thought that was the component that lets a process rwx from certain locations? I guess a full comparison would include AppLocker too.

I'm not too hot on Linux management options; I just install the thing over and over.


One trick I use when trying to see where in $HOME a program creates files is to create a new user with an empty $HOME, run the program, and then see what files were created. If it's a GUI program, give it permission to run from your regular user with xhost so you don't need to log in through the desktop manager.
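A minimal sketch of that trick (the user name and program are placeholders):

  sudo useradd -m probe
  xhost +SI:localuser:probe       # let the throwaway user talk to your X server
  sudo -u probe -H env DISPLAY=$DISPLAY someprogram
  sudo find /home/probe -type f   # afterwards: everything the program created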


Well, I usually do something similar, just by changing the environment variable: HOME=$HOME/tmp myprogram. Symlinking the .Xauthority file (if using X) works quite well.
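Spelled out (myprogram is a placeholder):

  mkdir -p ~/tmp
  ln -s ~/.Xauthority ~/tmp/.Xauthority   # so X11 auth still works under the fake HOME
  HOME=$HOME/tmp myprogram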

I actually always run most applications that way when they don't fully adhere to the XDG base dir specification.


> Of course you still have the problem of the software writing files during its operation but that should be limited to $HOME (on POSIX systems) or any path that is writable by the owner / group of the user that application runs as (which should be limited even if it’s a system service).

The really tricky problem is when a package must modify an existing shared resource. Such as appending lines to an existing config for example.


> The really tricky problem is when a package must modify an existing shared resource. Such as appending lines to an existing config for example.

This is commonly solved by having applications support both a main config file and a conf.d directory. The primary owner (package) of the resource modifies the conf file, while secondary packages drop their own config in conf.d/${package}. Numerous examples exist: logrotate, rsyslog, apache, nginx, systemd and apt come to mind.
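logrotate illustrates the pattern nicely: the main file just includes a drop-in directory, so packages never have to edit each other's config:

  # /etc/logrotate.conf (owned by the logrotate package):
  include /etc/logrotate.d

  # secondary packages each drop in their own file:
  #   /etc/logrotate.d/nginx
  #   /etc/logrotate.d/mysql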


> The really tricky problem is when a package must modify an existing shared resource. Such as appending lines to an existing config for example.

Pacman creates a .pacnew file and lets you merge it yourself for this very reason.


Seems like that is offloading the tricky bit to the user rather than solving the tricky problem to be honest.


Yeah, that's exactly what's happening. But Arch is an intentionally hands-on distro (e.g. it doesn't even ship an installer - instead you're expected to do everything yourself via the command line).

Obviously this wouldn't be to everyone's taste, but it's good that that market is catered to, in my opinion (but then I would say that, as I'm very much a hands-on person).


Yeah, but it's very transparent to the user, which is kind of the Arch Way, and to be fair, there are tools to help you, like pacdiff.
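For example (pacdiff ships with pacman-contrib; DIFFPROG picks the merge tool):

  DIFFPROG=vimdiff pacdiff   # find *.pacnew/*.pacsave files and merge them interactively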


I've had the installation of apt-get packages permanently hose an Ubuntu or Debian install. It's all up to packagers to author their packages right so they don't leave garbage on your machine that you have to manually clean up (or give up and reformat).


Your comment is very light on detail so it's hard to understand your issue properly, but I've been running Linux as my primary desktop for more than 15 years and have managed literally hundreds of Linux servers too, and never had a package manager hose my platform (big caveat: aside from the notorious `filesystem` update on Arch Linux, but that one is an extreme edge case due to the rolling-release nature of Arch. Even then, the package was well documented on Arch's site beforehand as requiring manual steps to upgrade).

It's true that Linux package managers used to be buggy and problematic in the 90s, but those days have long since gone. And while I'm not discounting that a package upgrade could damage your system, the instances when one does are highly unusual rather than a typical problem users face with each and every upgrade. In fact, Windows sysadmins dread running Windows updates far more than Linux admins do, and yet Windows updates only cover Microsoft products rather than every piece of software on the system.

> It's all up to packagers to author their packages right so they don't leave garbage on your machine that you have to manually clean up (or give up and reformat).

Actually it's not. It's up to the application developers to do that. If you specify that a package installs a file `x` to location `y`, then the package manager will uninstall that file automatically too. You don't specifically need to tell the package manager to do that (at least not with any of the packaging systems I've used). But if the application developer writes the application to spew out thousands of files into $HOME, that happens outside of the package manager. There isn't a whole lot you can do to stop that aside from limiting the directories which your application has permission to write to (via chroot, containerisation, user/group permissions, SELinux, or other forms of ACL - there are actually plenty of tools on Linux / UNIX to handle that problem).


Don't know about apt specifically, but using pacman (Arch Linux), you can list exactly what files on your filesystem were installed by what package and remove them. You can't do this on Windows, as far as I know.
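For example:

  pacman -Ql firefox    # every file the firefox package installed
  pacman -Rns firefox   # remove it, skipping .pacsave backups, plus unneeded deps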


As a kid, my favorite game was Norton CleanSweep. I couldn't stop watching it restore state; it was bliss.

ps: coincidentally, I was just starting to use firejail on Linux on a daily basis.. very very useful.


Yes, firejail is awesome, but you can only block writes to directories. What I'm looking for is an option to redirect all writes to a single directory. This should be transparent (the app might still think it's writing willy-nilly, but in reality all writes would be redirected to, let's say, ~/app).


I'm pretty sure you actually can do this with firejail, see: --overlay and --overlay-named. For some reason it looks like these are hardcoded (yay, UNIX culture!) to point to `$HOME/.firejail/<progname or name>`.
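If I'm reading the man page right, usage is roughly (needs a kernel with overlayfs support):

  firejail --overlay firefox             # writes land in an overlay under ~/.firejail
  firejail --overlay-named=ffx firefox   # reusable named overlay: ~/.firejail/ffx
  rm -rf ~/.firejail/ffx                 # "uninstall" = delete the overlay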


This is exactly what I was looking for, thanks.


> Uninstalling was as easy as removing that dir.

File writes for application files are rarely the problem any more.

The problem is that in order to function correctly (for some definition of correct - say, to associate file extensions, create shortcuts, start automatically, install a dependency such as a C++ runtime patch, whatever) the program needs to write to subsystems of the OS in a non-reversible way. It's also very HARD to do these things (create setups) because systems like Windows Installer aren't trivial to use. Every time a setup author makes a mistake there is a risk of stuff being left behind.

Fundamentally, you are in state A when installing the program, which creates state B. Then you continue to modify the system simply by using it, or by installing more software, creating state C. If you now uninstall the first program, you don't have anything but a script undoing A->B, which, run backwards, can only do B->A - but you are in state C, and you don't want to first run C->B because you want to keep the other parts of state C. So the uninstall script has to run in unknown territory (a file may have changed, a later dependency version may have been installed globally, a registry entry may not exist because entries are NOT isolated per application, etc.), and it just has to do what it can.

A sandbox could be a solution to this, where the sandbox contains diff views over some immutable base image. It probably is a lot easier to do (and do efficiently) with OS support.
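Linux already ships the building block for exactly that diff view in overlayfs; conceptually:

  # all writes to /merged are diverted into /upper; /base stays untouched
  # (workdir must be an empty dir on the same filesystem as upperdir)
  mount -t overlay overlay \
      -o lowerdir=/base,upperdir=/upper,workdir=/work /merged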


Isn’t macOS doing that? The Mac App Store only allows sandboxed apps, and by default macOS only allows apps from the store (plus certified developers) to be installed unless you change the system's settings.


On macOS some protection was added so apps cannot write to system-protected directories.

But I was talking about all the files an app creates. Like files in the home dir (e.g. ~/Library). If you remove the app, those files stay there and occupy space.

The only way you can partly clean up the mess is to delete the home dir from time to time (but back up important files first). Even then, there might still be files in /usr/local etc.


A macOS app installed from the app store can only write to ~/Library/Containers/name.of.app.bundle. Those are not automatically trashed (as far as I know), but that's much easier to clean than the whole ~/Library. Actually, if all your apps are in /Applications it would be easy to write a small script that deletes everything in Containers that (a) is not from Apple and (b) doesn't have an app bundle identifier in /Applications.
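A rough sketch of such a script (untested; it assumes Spotlight can resolve bundle identifiers via mdfind, and it only prints candidates rather than deleting anything):

  for dir in ~/Library/Containers/*; do
    id=$(basename "$dir")
    case "$id" in com.apple.*) continue;; esac   # (a) skip Apple's own containers
    # (b) does any app bundle in /Applications claim this identifier?
    if ! mdfind "kMDItemCFBundleIdentifier == '$id'" | grep -q '^/Applications/'; then
      echo "orphaned container: $dir"
    fi
  done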


TBH I didn't know about that. The situation is probably the same with Windows UWP apps installed from the store (though there is a special permission granting access to the whole FS, which lets an app write outside its sandbox dir). Anyway, there are so many apps that aren't installed from app stores. IMO a proper sandbox is still needed in 2018.


And there are in fact “app cleaners” that do exactly this.


The app I use to do that is literally called AppCleaner; I've been using it for years and it's one of the first things I install.

For example, the other day I moved Word to the trash; 5 seconds later I get the AppCleaner pop-up letting me know it found an additional 2GB of shit that Word had littered around my machine which wouldn't have been removed by just deleting the app. And unfortunately, that definitely hasn't been the worst offender I've run into; at this point I'm almost always surprised by the amount of leftover crap that doesn't get removed when deleting an app.


> Nowadays it's almost impossible to uninstall an app completely, because most of them create files willy-nilly. And it's the same on all known OSes. The side effect is that system size grows over time.

Unless I am mistaken, I don't think this is the case for iOS, Android, ChromeOS, FirefoxOS, and many game consoles.

This is really just a problem with desktop and server operating systems, not with operating systems as a whole. It's also getting better with package managers, the Windows Store, and UAC.


Yes, I forgot to mention mobile OSes. Especially iOS, which doesn't keep any app files on disk when an app is uninstalled. Android apps tend to keep files on the SD card (virtual or real). Some apps might benefit from this (e.g. you don't have to redownload huge map files for a navigation app), but paradoxically the Sygic Navigation app doesn't store map files on the SD card, while some crappy apps where it doesn't make sense do. So in practice it's not very different from what we have on PC.


UWP and/or the Windows Store has a way of packaging applications so they don't barf all over the system, assuming they're not maliciously designed to subvert this.


Frankly, the very concept of "installing" an application is a ridiculous invention. Many systems of the past had self-contained applications that could just be dragged around between disks, copied, and deleted, seamlessly meshing with the files & folders desktop metaphor. Of course none of those systems enforced this behavior, which is something we could do today but, for the most part, don't.

At least on Windows I have my pick of thousands of Portable Apps (and most Windows software can act as a portable app if you just extract it without installing it anyway, albeit still leaving junk in the registry). You know what's a great feeling? Being able to reinstall your OS and just pointing a new toolbar at wherever you keep your portable apps and being good to go.


You said ‘systems of the past’, but isn’t this how macOS works?

If I download an app, I’m anticipating a .dmg to mount, which holds a self-contained .app which runs anywhere, and a link to /Applications, to suggest a sensible place to put it.

It’s not enforced, but it is the norm.


> And it's the same on all known OSes.

I guess you're missing one. Android allows you to remove everything the app created.


Total Uninstall. Basically a system diff tool. I run it before & after every install.


And this is where/when/why Docker (or other 'container' concept) will win.



