Hacker News | forrestthewoods's comments

> why are we focussing on making things accessible to AI

Because that’s the author’s actual goal? To take a web page that looks fine to human eyes but is unintuitively not accessible to AI, and make it accessible. That’s genuinely useful and valuable.

Sure it’s no different than converting it to markdown for human eyes. But it’s important to be clear about not just WHAT but also WHY.

C’mon now. This isn’t controversial or even bad.


Makes one wonder what Apple’s actual goal is.

I mean.... it could have broader appeal without artificially restricting its audience

Not everything is for everyone. Y'all are criticising this project's bean soup because you don't like beans.

If you think it has less appeal because it was built for and advertised for AI that’s a you problem.

> I'm not writing an app, just a CLI tool

but CLI tools are applications


No, they are not. Those two are very different on macOS, where the word ‘app’ means an Application Bundle: a directory with a .app extension, an Info.plist file, a bundle identifier, and an expected directory structure per Apple guidelines, which should be installed in /Applications or ~/Applications, and so on and so forth.
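For readers unfamiliar with the format, a minimal bundle looks roughly like this (MyApp is a placeholder name):

    MyApp.app/
      Contents/
        Info.plist     <- bundle identifier, version, and other metadata
        MacOS/
          MyApp        <- the actual executable
        Resources/     <- icons and other assets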

CLI tools, including ones that Apple ships or makes, are not apps on macOS.

I’m sorry, this is my pet peeve as well, and it’s very frustrating to see this ‘CLI tools are apps’ argument from developers who are not familiar with the Apple guidelines, and who then argue about it on an ideological basis.


That is covered in the article. An “App” on Mac is a specific thing with certain characteristics that CLI tools don’t have.

This “well ackchyually“ is not particularly helpful. You’re not wrong. But this argument was lost 30 years ago. The programming field has been using “Big O” loosely.

I've had conversations at work where the difference was important. Mass adoption of quicksort implies I'm not the only one.

I’m gonna blow your mind. If it happens I’m going to loudly criticize both!

It’s a bloody shame that Linux is incapable of reliably running software programs without layers and layers of disparate, competing abstractions.

I’m increasingly convinced that the mere existence of a package manager (for programs, not source code) is a sign of a failed platform design. The fact that it exists at all is a miserable nightmare.

Flatpak and Snap tried to make this better. But they do too much, which just introduces new problems.

Steam does not have this problem. Download game, play game. Software is not that complicated.


> Steam does not have this problem. Download game, play game. Software is not that complicated.

Steam on Linux essentially has its own "package manager" which uses containerized runtimes: https://gitlab.steamos.cloud/steamrt/steam-runtime-tools


The Steam Linux Runtime is pretty bare bones. Their most recent runtime hasn’t been updated in 4 years. That’s quite different.

> Their most recent runtime hasn’t been updated in 4 years. That’s quite different.

Bad, even.


False. The exact opposite of bad.

The “system” should provide the barest minimum of libraries. Programs should ship as many of their dependencies as is technically feasible.

Oh what’s that? Are you crying about security updates? Yeah, well, unfortunately you shipped everything in a Docker container so you need to rebuild and redeploy all of your hierarchical images anyways.


> False. The exact opposite of bad.

I don't mind stable base systems, I don't mind slow and well tested updates, I actively like holding stable ABIs, but if you haven't updated anything in 4 years, then you are missing bug and security fixes. Not everything needs to be Arch, but this opposite extreme is also bad.

> The “system” should provide the barest minimum of libraries. Programs should ship as many of their dependencies as is technically feasible.

And then application developers fail to update their vendored dependencies, and thereby leave their users exposed to vulnerabilities. (This isn't hypothetical, it's a thing that has happened.) No, thank you.

> Oh what’s that? Are you crying about security updates? Yeah, well, unfortunately you shipped everything in a Docker container so you need to rebuild and redeploy all of your hierarchical images anyways.

So... are you arguing that we do need to ship everything vendored in so that it can't be updated, or that we need to actually break out packages to be managed independently (like every major Linux distribution does)? Because you appear to have advocated for vendoring everything, and then immediately turned around to criticize the situation where things get vendored in.


> I don't mind stable base systems, I don't mind slow and well tested updates, I actively like holding stable ABIs, but if you haven't updated anything in 4 years, then you are missing bug and security fixes.

I'm not sure GP's claim here about the runtime not changing in 4 years is actually true. There hasn't been a version number bump, but files in the runtime have certainly changed since its initial release in 2021, right? See: https://steamdb.info/app/1628350/patchnotes/

It looks to me like it gets updated all the time, but they just don't change the version number because the updates don't affect compatibility. It's kinda opaque though, so I'm not totally sure.


> So... are you arguing that we do need to ship everything vendored in so that it can't be updated,

I’m arguing that the prevalence of Docker is strong evidence that the “Linux model” has fundamentally failed.

Many people disagree with that claim and think that TheLinuxModel is good actually. However, I point out that these people almost definitely make extensive use of Docker. And that Docker (or similar) is actually necessary to reliably run programs on Linux because TheLinuxModel is so bad and has failed so badly.

If you believe in TheLinuxModel and also do not use Docker to deploy your software then you are, in the year 2025, a very rare outlier.

Personally, I am very pro ShipYourFuckingDependencies. But I also don’t think that deploying a program should be much more complicated than sharing an uncompressed zip file. Docker adds a lot of cruft. Packaging images/zips/deployments should be near instantaneous.


> I’m arguing that the prevalence of Docker is strong evidence that the “Linux model” has fundamentally failed.

That is a very silly argument considering that Docker is built on primitives that Linux exposes. All Docker does is make them accessible via a friendly UI, and adds some nice abstractions on top such as images.
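To be concrete, here's a rough sketch of poking at those primitives directly with util-linux's unshare, no Docker involved (root required; purely illustrative):

    # start a shell in fresh PID and mount namespaces
    sudo unshare --pid --fork --mount-proc /bin/bash
    # inside the new namespace, ps shows only this shell and ps itself
    ps aux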

It's also silly because there is no single "Linux model". There are many different ways of running applications on Linux, depending on the environment, security requirements, user preference, and so on. The user is free to simply compile software on their own if they wish. This versatility is a strength, not a weakness.

Your argument seems to be against package managers as a whole, so I'm not sure why you're attacking Linux. There are many ecosystems where dependencies are not vendored and a package manager is useful, others where the reverse is true, and some with both.

There are very few objectively bad design decisions in computing. They're mostly tradeoffs. Choosing a package manager vs vendoring is one such scenario. So we can argue endlessly about it, or we can save ourselves some time and agree that both approaches have their merits and detriments.


> That is a very silly argument considering that Docker is built on primitives that Linux exposes

No.

I am specifically talking about the Linuxism where systems have a global pool of shared libraries in one of several common locations (that ever so slightly differs across distros because fuck you).

Windows and macOS don’t do this. I don’t pollute system32 with a kajillion random ass DLLs. A Windows PATH is relatively clean from random shit. (Less so when Linux-first software is involved). Stuffing a million libraries into /usr/lib or other PATH locations is a Linuxism. I think this Linuxism is bad. And that it’s so bad everyone now has to use Docker just to reliably run a computer program.

Package managers for software libraries to compile programs is a different scenario I’ve not talked about in this thread. Although since you’ve got me ranting the Linuxisms that GCC and Clang follow are also fucking terrible. Linking against the random ass version of glibc on the system is fucking awful software engineering. This is why people also make Docker images of their build environment! Womp womp sad trombone everyone is fired.

I don’t blame Linux for making bad decisions. It was the 80s and no one knew better. But it is indeed an extremely bad set of design decisions. We all live with historical artifacts and cruft. Not everything is a trade off.


> I am specifically talking about the Linuxism where systems have a global pool of shared libraries in one of several common locations (that ever so slightly differs across distros because fuck you).

> Windows and macOS don’t do this.

macOS does in fact have a /usr/lib. It's treated as not to be touched by third parties, but there's always a /usr/local/lib and similar for distributing software that's not bundled with macOS just like on any other Unix operating system. The problem you're naming is just as relevant to FreeBSD Ports as it is to Debian.

And regardless, it's not a commitment Nix shares, and its problems are not problems Nix suffers from. It's not at all inherent to package management, including on Linux. See Nix, Guix, and Spack, for significant, general-purpose, working examples that don't fundamentally rely on abstractions like containers for deployment.

I totally agree with this, though, and so does everyone who's into Nix:

> Stuffing a million libraries into /usr/lib [...] is bad.

> I don’t blame Linux for making bad decisions. It was the 80s and no one knew better. But it is indeed an extremely bad set of design decisions. We all live with historical artifacts and cruft. Not everything is a trade off.


> Windows and macOS don’t do this. I don’t pollute system32 with a kajillion random ass DLLs.

You can't be serious. Are you not familiar with the phrase "DLL hell"? Windows applications do indeed put and depend on random ass DLLs in system32 to this day. Install any game, and it will dump random DLLs all over the system. Want to run an app built with Visual C++, or which depends on C++ libraries? Good luck tracking down whatever version of the MSVC runtime you need to install...

Microsoft and the community realized this is a problem, which is why most Windows apps are now deployed via Chocolatey, Scoop, WinGet, or the MS Store.

So, again, your argument is nonsensical when focused on Linux. If anything, Linux does this better than other operating systems since it gives the user the choice of how they want to manage applications. You're not obligated to use any specific package manager.


> which is why most Windows apps are now deployed via Chocolatey, Scoop, WinGet, or the MS Store

rofl. <insert meme of Inglourious Basterds three fingers>

> Good luck tracking down whatever version of the MSVC runtime you need to install...

Perhaps back in 2004 this was an issue. That was a long time ago.

You use a lot of relevant buzz words. But it’s kinda obvious you don’t know what you’re talking about. Sorry.

> Linux does this better than other operating systems since it gives the user the choice of how they want to manage applications

I would like all Linux programs to reliably run when I try to run them. I do not ever want to track down or manually install any dependency ever. I would like installing new programs to never under any circumstance break any previously installed program.

I would also like a program compiled for Linux to just work on all POSIX compliant distros. Recompiling for different distros is dumb and unnecessary.

I’d also like to be able to trivially cross-compile for any Linux target from any machine (Linux, Mac, or Windows). glibc devs should be ashamed of what they’ve done.


> Perhaps back in 2004 this was an issue. That was a long time ago.

Not true. I experience this today whenever I want to use an app without a package manager, or one that doesn't bundle the VC runtime it needs in its installer, or one that doesn't have an installer.

> You use a lot of relevant buzz words. But it’s kinda obvious you don’t know what you’re talking about. Sorry.

That's rich. Resort to ad hominem when your arguments don't hold any water. (:

> I would also like a program compiled for Linux to just work on all POSIX compliant distros.

So use AppImage, αcτµαlly pδrταblε εxεcµταblε, a statically compiled binary, or any other cross-distro packaging format. Nobody is forcing you to use something you don't want. The idea that Linux is a flawed system because of one packaging format is delusional.

You're clearly not arguing in good faith, and for that reason, I'm out.


I assure you my faith of argument was good. Cheers.

> Many people disagree with that claim and think that TheLinuxModel is good actually. However, I point out that these people almost definitely make extensive use of Docker

You've got the wrong audience here. Nix people are neither big fans of "the Linux model" (because Nix is founded in part on a critique of the FHS, a core part and source of major problems with "the Linux model") nor rely heavily on Docker to ship dependencies. But if by "the Linux model" you just mean not promising a stable kernel ABI, pulling an OS together from disparate open-source projects, and key libraries not promising eternal API stability, it might have some relevance to Nixers...

> I also don’t think that deploying a program should be much more complicated than sharing an uncompressed zip file. Docker adds a lot of cruft. Packaging images/zips/deployments should be near instantaneous.

Your sense of "packaging" conflates two different things. One aspect of packaging is specifying dependencies and how software gets built in the first place in a very general way. This is the hard part of packaging for cohesive software distributions such as have package managers. (This is generally not really done on platforms like Windows, at least not in a unified or easily interrogable format.) This is what an RPM spec does, what the definition of a Nix package does, etc.

The other part is getting built artifacts, in whatever format you have them, into a deployable format. I would call this something like "packing" (like packing an archive) rather than "packaging" (which involves writing some kind of code specifying dependencies and build steps).

If you've done the first step well— by, for instance, writing and building a Nix package— the second step is indeed trivial and "damn near instantaneous". This is true whether you're deploying with `nix-copy-closure`/`nix copy`, which literally just copy files[1][2], or creating a Docker image, where you can just stream the same files to an archive in seconds[3].
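For concreteness, a minimal sketch of that second step with the stock Nix CLI (the host name and store path are placeholders):

    # copy the closure of a built output to another machine over SSH
    nix-copy-closure --to user@server /nix/store/<hash>-myapp
    # or, with the newer CLI, copy whatever ./result points at
    nix copy --to ssh://user@server ./result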

And the same packaging model which enables hermetic deployments, like Docker but without requiring the use of containers at all, does still allow keeping only a single copy of common dependencies and patching them in place.[4]

--

1: https://nix.dev/manual/nix/2.30/command-ref/nix-copy-closure...

2: https://nix.dev/manual/nix/2.30/command-ref/new-cli/nix3-cop...

3: https://github.com/nlewo/nix2container

4: https://guix.gnu.org/blog/2020/grafts-continued/


> Programs should ship as many of their dependencies as is technically feasible.

Shipping in a container just is "ship[ping] as many [...] dependencies as is technically feasible". It's "all of them except the kernel". The "barest minimum of libraries" is none.

Someone who's using Docker is already doing what you're describing anyway. So why are you scolding them as if they aren't?


I’m scolding the person who says vendoring dependencies is bad… but then uses docker for everything anyways.

> I’m increasingly convinced that the mere existence of a package manager (for programs, not source code) is a sign of a failed platform design.

Nix is a build system for source code, similar to make. It is such a robust build system that it can also be used as a package manager with a binary cache.
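A small hedged example of that dual role in practice, using the hello package from nixpkgs:

    # builds hello from source, or substitutes it from the binary cache if available
    nix-build '<nixpkgs>' -A hello
    # the result is just a store path, symlinked as ./result
    ./result/bin/hello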


Does Steam let you control the whole dependency tree of your software, including modifying any part of it and rebuilding from source as necessary, or pushing it to a whole other machine?

Real life software is much more than just downloading a game and running it.


Pushing to another machine? Yes. By strict definition. Steam exists to sell pre-compiled proprietary programs for dollars.

Rebuilding? No. Linux package management is so-so at allowing you to compile programs. But they’re dogshit garbage at helping you reliably run that program. Docker exists because Linux can’t run software.


Docker (and also Nix) exists because it's not trivial to manage the whole environment needed to run an application.

There's a reason everyone uses it for ops these days, and not some Windows thing.


Yes. The reason is that Linux made very bad design decisions.

> it’s not trivial to manage the whole environment needed to run an application

This is a distinctly Linux problem. Despite what Linux would lead you to believe it is not actually hard to run a computer program.


Ok then where is the amazing non Linux deployment solution that everyone uses instead?

> Real life software is much more than just downloading a game and running it.

Real life software outside of Linux is pretty much just downloading and running it. Only on Linux do we lack a single stable OS ABI, forcing us to find the correct package for our specific distro, or to package the software ourselves.


Maybe for desktop use but when you want to deploy something to your server it's a bit more complicated than that.

> I’m increasingly convinced that the mere existence of a package manager (for programs, not source code) is a sign of a failed platform design

> Steam does not have this problem. Download game, play game.

These statements seem contradictory. Steam is a package manager. So is the Apple App Store. Sure, they have different UX than, say, apt/dnf/brew/apk/chocolatey, but they're conceptually package managers.

Given that, I'm unclear what the gripe is (though I'm totally down to rip on Snap/Flatpak; I won't rant here, but I did elsewhere: https://news.ycombinator.com/item?id=44069483). Is the issue with OS/vendor-maintained package managers? Or is the issue with package installers that invoke really complicated build systems at install time (e.g. package managers that install from source)?


This is getting into semantics. Personally I would not consider downloading a zip file from a GitHub releases page in a web browser to be using a “package manager”. But someone could try and make that argument.

None of this has formal definitions which makes it difficult to discuss.

Your rant on Snap/Flatpak was great.

The core gripe is that I want running computer programs on Linux to be easy and reliable. It is not. MacOS and Windows are far more reliable, and they don’t require (imho) package managers to do it.


> The core gripe is that I want running computer programs on Linux to be easy and reliable. It is not.

No argument here.

What's interesting, though, is that package managers on Linux are the attempted solution to that problem. Without them, hand-managing dependencies and dependency discovery via the "download a zipfile from GitHub" approach just falls apart: said zipfile often wants to link against other libraries when it launches.

Windows (and runtimes like Golang) take a batteries-included approach by vendoring many/most dependency artifacts with binary distributions. MacOS app bundles do a bit of that, and also have a really consistent story about what system-level dependencies are available (which is only a feasible approach for MacOS because there's a single maintainer and distributor of the system).

But even on those platforms, things break down a lot! There are all sorts of problems for various Windows apps that need to be solved by "acquire so-and-so.dll and copy it into this app's install folder, or else its vendored version of that dll will break on your system". Homebrew on MacOS exists (and has highly variable complexity levels re: installation/dependency discovery) precisely because the amount of labor required to participate in the nice app bundle/MacOS native-app ecosystem is too great for many developers.

That said, there's not really a punchline here. It's complicated, I guess?

> Your rant on Snap/Flatpak was great.

Thank you!


It’s not a package manager. It’s a project manager.

> But if your configuration files devolve into DSL, just use a real programming language already.

This times a million.

Use a real programming language with a debugger. YAML is awful and Starlark isn’t much better.


> Use a real programming language with a debugger. YAML is awful and Starlark isn’t much better.

I was with you until you said "Starlark". Starlark is a million times better than YAML in my experience; why do you think it isn't?


My experience with Starlark (buck2) is that it makes the whole system wildly complex and inscrutable.

No one actually knows how it works. It’s an undebuggable nightmare of macros. Everyone copy/pastes a few macros they know work. But one step off the beaten path and you’re doomed.

I hate code that looks like data but is in fact code. Be data or be code. Don’t pretend to be both.

I tried adding support for Jai to public buck2. I didn’t even get close. I need static types and a debugger. Just make everything a Rust plugin.


bonus points when you start embedding code in your yamlified dsl.

This was awesome. And the floating preview thingy worked great. Major kudos!

> It's just art style.

Art styles aren't picked randomly out of a hat. Humans are pattern matching machines and will draw conclusions based on choice of art style or mascot.

> classic corporate caricature of a person with unnatural body proportions

Corporate Memphis is an abomination and I harshly judge any company that uses it. Everyone hates Corporate Memphis and makes fun of it.


IMHO Corporate Memphis is much worse than anime art. "You have small brain" is the message it seems to imply.

I don't necessarily disagree. But in either case the point is "it's just an art style" is wrong. People make judgement calls based on art. It's just human nature.

IMO both styles have that message.

Git is fundamentally broken and bad. Almost all projects are de facto centralized. Your project is not Linux.

A good version control system would support petabyte scale history and terabyte scale clones via sparse virtual filesystem.

Git’s design is just bad for almost all projects that aren’t Linux.

(I know this will get downvoted. But most modern programmers have never used anything but Git and so they don’t realize their tool is actually quite bad! It’s a shame.)


> A good version control system would support petabyte scale history and terabyte scale clones via sparse virtual filesystem.

I like this idea in principle but I always wonder what that would look in practice, outside a FAANG company: How do you ensure the virtual file system works equally well on all platforms, without root access, possibly even inside containers? How do you ensure it's fast? What do you do in case of network errors?


NFS server not responding. Still trying...

Tom Lyon: NFS Must Die! From NLUUG 2024:

https://www.youtube.com/watch?v=ZVF_djcccKc

>Why NFS must die, and how to get Beyond FIle Sharing in the cloud.

Slides:

https://nluug.nl/bestanden/presentaties/2024-05-21-tom-lyon-...

Eminent Sun alumnus says NFS must die:

https://blocksandfiles.com/2024/06/17/eminent-sun-alumnus-sa...


Someone just needs to do it. Numerous companies have built their own cross-platform VFS layers. It’s hard but not intractable.

Re network errors. How many things break when GitHub is down? Quite a lot! This isn’t particularly special. Prefetch and clone are the same operation.


Yeah we're at the CVS stage where everyone uses it because everyone uses it.

But most people don't need most of its features and many people need features it doesn't have.

If you look up git worktrees, you'll find a lot of blog articles referring to worktrees as a "secret weapon" or similar. So git's secret weapon is a mode that lets you work around the ugliness of branches. This suggests that many people would be better suited by an SCM that isn't branch-based.

It's nice having the full history offline. But the scaling problems force people to adopt a workflow where they have a large number of small git repos instead of keeping the history of related things together. I think there are better designs out there for the typical open source project.


I don't understand what you mean by "the ugliness of branches".

In my experience, branches are totally awesome. Worktrees make branches even more awesome because they let me check out multiple branches at once to separate directories.

The only way it could get better is if it somehow gains the ability to check out the same branch to multiple different directories at once.


> ability to check out the same branch to multiple different directories at once.

So you want shared object storage, but separate branch metadata. That's git clone with hardlinks, which is what Git does locally by default.


Git worktrees won't allow me to check out a branch twice though. I wonder if there's some technical limitation that prevents it.

That is because the metadata is shared between worktrees. So when you modify a branch in one worktree, it isn't modified per worktree, but in the whole repo. So what you need to do is duplicate the branch metadata. That's what git clone does. You essentially have these cases:

    shared worktree, shared branch/index data, shared object storage  -> single repo, single worktree
    separate worktree, shared branch/index data, shared object storage  -> repo, with worktrees
    separate worktree, separate branch/index data, shared object storage  -> git clone
    separate worktree, separate branch/index data, separate object storage  -> git clone --no-hardlinks

You can checkout a commit twice though. What I don't get is what checking out a branch twice gets you. As soon as you add a single commit, these branches will be different, so why not just create another branch? Branches in git are cheap.
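For concreteness, the last three cases map to roughly these commands (the paths and branch names are made up):

    # new worktree sharing refs and objects with the main checkout
    git worktree add ../feature-wt my-branch
    # local clone: separate refs, objects shared via hardlinks by default
    git clone /path/to/repo /path/to/copy
    # fully independent copy, object storage included
    git clone --no-hardlinks /path/to/repo /path/to/copy2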

Makes sense to me, thank you. It's just something I tried to do once. I didn't think very hard about it so it felt like an arbitrary limitation.

What I tried to do was have two copies of the same branch checked out to different directories: one pristine, another with uncommitted changes. The idea was to run make on both directories, profile the results, and then decide whether to commit or discard the changes. Now I see the solution is to just make another branch.


Yeah, in this case I probably wouldn't even bother with branches and would only checkout the commit. Another way to solve this problem is of course VPATH builds.
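A sketch of that commit-checkout variant, assuming GNU make and a ../pristine path picked purely for illustration:

    # detached checkout of the current commit in a second directory
    git worktree add --detach ../pristine HEAD
    # build both trees, then profile and compare
    make -C ../pristine
    make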

Git now has artificial feet to aim the foot guns at so they hit the right target.


Completely disagree. Git is fundamentally functional and good. All projects are local and decentralized, and any "centralization" is in fact just git hosting services, of which there are many options which are not even mutually exclusive.


Git works fine and is solid and well enough known to be a reasonable choice for most people.

But I encourage everyone to try out a few alternatives (and adopt their workflows at least for a while). I have no idea if you have or not.

But if one has never used the alternatives, one doesn’t really know just how nice things can be. Or, even if you still find Git to be your preferred choice, having an alternative experience can open you to other possibilities and ways of working.

Just like everyone should try a couple of different programming languages or editors or anything else for size. You may not end up choosing it, but seeing the possibilities and different ways of thinking is a very good thing.


Yeah, the decentralized design is incredibly useful in the day-to-day, for ~any project size.


Incorrect. All the features you think are associated with the D in DVCS are perfectly accessible to a more centralized tool.


Either you have all the data locally, or you need to send the data around as soon as you traverse or modify history. No, not having everything local would be a massive downside for some operations.

Nonsense. Be specific. Google, Meta, and every large game studio on the planet use large centralized monorepos. It's not only fine, it's great!

If you use Git it's not possible to "have everything" because Git is not capable of storing everything that needs to be version controlled. For most devs, their version control is actually a combination of Git, Docker images, and lord knows how many other data sources. It's a miserable nightmare. Good luck and Godspeed to anyone who tries to build something from just a few years ago, never mind a 10+ year old project!


I'm not talking about what needs to be in Git and what not. I mean that for git functionality that analyzes the history (git blame), the data needs to be traversable, i.e. available locally; fetching it over the wire would introduce massive latency. You claimed it doesn't need to be.

> All the features you think are associated with the D in DVCS are perfectly accessible to a more centralized tool.


Git Blame sucks and is far far far far far far worse than Perforce Timelapse View.

You moved the goal posts a little bit. You are correct that if you want to work in a cave with no internet then you do need every bit of data to perform at least some operation. That’s not what the D in DVCS means though.


That's exactly what the D means: All the repo data is on every machine, the repo is complete in itself and no repo is any different than the other.

For me connecting to the Internet is very much a choice. Doesn't mean I wouldn't have access if I didn't want to, but why would I need it most of the time?


One of the features I think is associated with the D is the ability to work offline with full access to everything in my repository.

Are you missing that central hosting services provide a good backup plan for your locally hosted Git repos?


I agree! They are excellent git backup services. I use several of them: github, codeberg, gitlab, sourcehut. I can easily set up remotes to push to all of them at once. I also have copies of my important repositories on all my personal computers, including my phone.

This is only possible because git is decentralized. Claiming that git is centralized is complete falsehood.
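For example, one way to do the "push to all of them at once" trick is extra push URLs on a single remote (the URLs below are placeholders):

    # add additional push URLs to the same remote
    git remote set-url --add --push origin git@github.com:example/repo.git
    git remote set-url --add --push origin git@codeberg.org:example/repo.git
    # a single push now goes to every configured push URL
    git push origin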

