Hacker News | noirscape's comments

There are exactly three reasons for people to stick to Twitter:

* They don't care/agree with the policies of the guy running it.

* Legacy reasons; either they have no reason to leave (automated org accounts keep running until something in the workflow breaks) or they have an existing community that doesn't want to move. This group will eventually leave but is currently stuck with inertia. Most "public service" accounts are in this category.

* And finally, for artists, Bluesky is undesirable as a platform because it applies some very aggressive image compression compared to Twitter (2000x2000 pixels is the absolute limit). Some are dual-posting to Bluesky, but are unlikely to fully leave Twitter for this reason.

Finally, I'll note that while accounts are generally abandoning Twitter, this doesn't automatically mean they're moving to Bluesky either. A lot of those service accounts simply vanished with a parting "well, go visit our website".


> * They don't care/agree with the policies of the guy running it.

I don't care about his policies per se; what bothers me is that even though I follow, and am followed by, CS and math people, I still see mostly far-right / Nazi / Trump / crypto comments about everything. Even in small threads about very technical stuff, people come up with the most crazy shit, and these days there's the almost mandatory "Grok, is this true/profound/worth anything/etc.". It's just annoying, and maybe I shouldn't care. I don't have that experience on other platforms (mostly the same following/followers, since they're also there).


Ever since Musk took over Twitter, the replies have been useless. It used to be that you could follow the replies as a discussion. Now, the replies are a mix of unmarked ads, bots, and weird call-outs to Grok, sometimes all at the same time.

All the automation and algorithmic garbage crowd out actual human discussion. It's a big reason why I stopped using Twitter myself. If they want to optimize for bots and weirdos driving up engagement numbers, go for it. But that's not a service that brings me value.


On the note of artists, I always wondered why so many artists don't use a proper gallery/portfolio in addition to social media. This could be a general art-sharing platform, one of the many niche- or fandom-specific gallery sites, or their own website. Get the audience and reach through social media, but link back to a portfolio with the originals for those who care.

To be fair, the sites resorting to extreme anti-bot measures are also often not the ones that are a client-side JavaScript hell.

Thinking of the most extreme option (throwing proof-of-work checks at browsers), the main sites that jump to mind are sourcehut, the Linux Kernel Archives and so on, and the admins of all of those sites have noted that the traffic they get is far outside expectations[0]. Not whatever blogspam ended up at the top of Google search that day.

The badly designed sites are often the ones that don't care about their bandwidth anyways.

[0]: https://drewdevault.com/2025/03/17/2025-03-17-Stop-externali...
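The proof-of-work idea those sites reach for is simple to sketch. Below is a generic Python illustration (not the actual scheme any of these sites use): the server hands out a random challenge, the client burns CPU finding a matching nonce, and the server verifies with a single hash.

```python
# Generic proof-of-work sketch: expensive to solve, cheap to verify.
import hashlib
import secrets

def solve(challenge: str, difficulty: int = 4) -> int:
    """Brute-force a nonce whose hash starts with `difficulty` zero hex digits.
    Expected cost grows ~16x per extra digit, which is what deters mass scraping."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """The server side: one hash, so verification costs almost nothing."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

challenge = secrets.token_hex(8)           # issued per visitor in a real setup
nonce = solve(challenge, difficulty=3)     # kept low so the demo runs instantly
print(verify(challenge, nonce, difficulty=3))  # → True
```

A human pays the cost once per visit; a crawler hitting millions of pages pays it millions of times, which is the whole point.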


Most of the US's power is contingent in part on it not rocking the boat too much. That's what the current US administration is destabilizing; by harming its own allies, it's pushing those allies to look for other options. With the exception of IT software, the US has little dominance in any individual sector. IT hardware largely isn't produced in the US; it's produced in Taiwan (TSMC) and Europe (ASML), and then assembled in China.

Payment services aren't a source of US power, they're a consequence; the US allowed itself to be a delivery market, making those payment services a soft requirement for anyone dealing with the US. If anything, the US payment networks are generally viewed with scorn outside the US; they're painfully dated (US banks largely rely on a system designed for physical cheques to this day) and the companies running them are often subject to the whims of astroturfing activists, resulting in legal transactions being blocked because someone thinks buying porn is icky, even though it's legal. US payment companies are also notorious for being hard to get a hold of when you want to enforce your rights as a customer; PayPal has to follow several EU laws, but it mostly dodges enforcement by putting its HQ in Luxembourg, which is so small that it can effectively employ all the well-paid financial lawyers in the country, leaving duped customers with very few options because of conflicts of interest.

Most of the world's dependence on the US is as a delivery market; if the US stops being attractive (i.e. because tariffs make it too expensive for US importers to buy goods), then the world will gradually compensate, even if it is economically unpleasant for a while. The only other dependence is military, but don't worry there; the US is doing a great job making its military allies realize that it's bad at helping them, since POTUS is actively interested in working with the enemies of the US instead.

Offshoring is going to become more common because the tariffs are blanket rates; if I'm making a product, there are only two options to avoid tariffs as much as possible. The first is to import only primary goods into the US and process everything on-site (keep in mind, you're still paying a tariff even on these materials). That's very expensive, in part because American labour is expensive. It's also not very realistic; your average product these days flies its components across several countries before it ends up being put together, and even if you source all your manufacturing locally, you're still dealing with the fact that your suppliers don't.

The other option is to... just pay the tariffs at the end of the supply chain: raise prices on US customers and try to route your entire production chain around the US. That's what the big brands are doing right now. They aren't going to publicly declare price increases if they can help it (because the risk of political retaliation is real under the current US administration), but expect the next products in their pipeline to have significant price increases that make the customer eat the tariffs. This happens in no small part because the tariffs are ultimately seen as temporary; moving a production pipeline entirely to the US can take up to a decade, and most companies are assuming the tariffs will be gone in ~3.5-4 years when the administration leaves. That's not worth setting up a real production pipeline in the US for.
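To put rough numbers on the two options above (all figures are hypothetical, chosen only to illustrate the trade-off):

```python
# Back-of-the-envelope comparison of the two tariff-avoidance options.
def onshore_cost(materials: float, material_tariff: float, us_labor: float) -> float:
    """Option 1: import only primary goods (still tariffed) and process in the US."""
    return materials * (1 + material_tariff) + us_labor

def import_finished_cost(materials: float, foreign_labor: float, finished_tariff: float) -> float:
    """Option 2: build abroad with cheaper labor, pay the blanket tariff on the finished good."""
    return (materials + foreign_labor) * (1 + finished_tariff)

# Hypothetical inputs: $40 of materials, a 20% blanket tariff either way,
# $50 of US labor vs. $15 of foreign labor for the same processing work.
print(onshore_cost(40, 0.20, 50))           # → 98.0
print(import_finished_cost(40, 15, 0.20))   # → 66.0
```

Even with the tariff applied to the full finished good, the labor gap keeps option 2 cheaper, which is why most companies just eat the tariff and raise prices.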


This guy gets it. The US is literally creating the market for its own replacement(s)!

The problem with Game Pass is that it applies the Spotify model to games. In practice, it doesn't seem to scale well - Microsoft has seemingly hit a ceiling of ~35 million subscribers because of a lot of existing aversion to subscription services in games, which isn't enough customers to actually amortize the cost of development, even at an indie scale.

Indie developers in particular don't like Game Pass because it apparently pays Spotify-tier rates, which is pretty bad. Spotify gets away with it because it took a deal with all big music labels for more favorable payouts, but your average indie band on Spotify makes absolutely zilch from your Spotify subscription, even if you listen to them 24/7 every year. Indie bands typically compensate with concerts and brand merchandise, but that isn't an option for games - secondary income sources are typically reviled (microtransactions in paid games) or don't sell to expectations (merchandise). The Spotify model only "works" because they shifted the music industry to rely primarily on those "side" sources (and even then there's a lot of disgruntled musicians who are unhappy with the Spotify model devaluing their craft).


It's true that Game Pass subscriber growth has slowed, but I don't think 35 million is any kind of permanent cap. There are 910 million PC gamers in the world today, and this is growing by approximately 35 million per year. This is, of course, in addition to Xbox owners. As more people become PC gamers each year, more people discover and subscribe to Game Pass. Ditto for existing gamers who discover Game Pass, or decide to finally try it and stick. Tastes and expectations are changing, and just as we accepted subscriptions for music, I think subscriptions for gaming are becoming more normal.

I've gotten tremendous value out of GamePass. I very rarely replay games, so all the games that swing by long enough on GamePass for me to play through and enjoy (and that I would otherwise never have bought) have made the subscription model work really well.

It has also reduced my game "clutter" in a way I very much appreciate.


I had issues with similar things for a couple of years too. The reality is that there's remarkably little existing advice for maintaining a soft fork that doesn't intend to upstream its patches. (For reference, probably the most notable patch fork that can't/doesn't upstream anything, GNU IceCat, uses a bash file from hell to apply all of its changes to the Firefox source code - it is not a scalable solution.)

Ultimately the solution I ended up using was git rebase; it just works the nicest out of all of them:

* Your patches are always kept on top of the git log.

* It's absolutely trivial to drop an unnecessary patch, add a new one in the chain or to merge two patches that do the same thing. (Use git rebase -i for that.) Fixing typos in patches is trivial too.

* Your history isn't so important for a patch fork; the patches are what matters, so don't fret too much about commit hashes changing. I promise you, it'll be fine.

* Git will complain if you try to do a rebase that doesn't work out of the box, using tools similar to those for resolving merge conflicts. You can instantly pull from another upstream and rebase with git pull --rebase upstream/master. This does assume you've added the upstream as a second remote named upstream, and that they push the code you want to patch to the master branch.
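The whole loop can be sketched end to end with a throwaway local repo standing in for the real upstream (all names, paths and commit messages here are made up for the demo; `git init -b` needs git >= 2.28):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A stand-in for the real upstream project, with one commit on master.
git init -q -b master upstream && cd upstream
git -c user.name=up -c user.email=up@example.com commit -q --allow-empty -m "upstream: v1"
cd ..

# The patch fork: clone upstream and carry a local patch on top.
git clone -q "$tmp/upstream" fork && cd fork
echo "local change" > local-patch.txt
git add local-patch.txt
git -c user.name=fork -c user.email=fork@example.com commit -qm "patch: local change"

# Upstream moves ahead while we weren't looking...
cd "$tmp/upstream"
git -c user.name=up -c user.email=up@example.com commit -q --allow-empty -m "upstream: v2"

# ...so the fork pulls with --rebase: the patch is replayed on top of the new tip.
cd "$tmp/fork"
git -c user.name=fork -c user.email=fork@example.com pull -q --rebase origin master
git log --format=%s   # the patch stays the newest commit in the log
```

In the real workflow, `origin` would be your fork's host and the upstream would be a second remote, but the rebase mechanics are identical.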

As for drawbacks, I only wound up with two:

* CI tools and git server UIs often aren't prepared to handle a heavily rebased master branch - it leads to lots of builds that are linked to dangling commit hashes. GitHub also for some reason insists on displaying the date of the last rebase, rather than the date of when the patch was committed. Not sure why.

* Pushing your fork means heavy use of force pushes, which feels instinctively wrong.

Neither drawback is large enough for me to mind it in practice.

I opted to use rebase for this sort of fork after reading a bit about non-merge git flows and wondering what would happen if I did a rebase-based workflow but just... never sent any patches. Turns out it works really well.


Yeah, using the real repository and rebasing atop the release commit has always seemed fine to me, provided the project uses Git. And if you want to keep track of the patches on old versions, just tag them—if upstream has tag 1.2.3, tag 1.2.3+chrismorgan or similar. This occasionally messes with build scripts—but then, not tagging sometimes does too.

> GitHub also for some reason insists on displaying the date of the last rebase, rather than the date of when the patch was committed. Not sure why.

Sounds like you’re running into the difference between author and committer, which Git models distinctly.
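A quick demo of the two dates Git tracks (the date values are arbitrary; GitHub's UI shows the committer date, which a rebase resets):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main demo && cd demo

# Author date = when the patch was originally written;
# committer date = when the commit was last rewritten (rebase, amend, cherry-pick).
export GIT_AUTHOR_DATE="2020-01-01T12:00:00+0000"
export GIT_COMMITTER_DATE="2024-06-01T12:00:00+0000"
git -c user.name=a -c user.email=a@example.com commit -q --allow-empty -m "example"

git log -1 --date=short --format='author:    %ad%ncommitter: %cd'
# → author:    2020-01-01
#   committer: 2024-06-01
```

A rebase rewrites each commit, so the committer date jumps to "now" while the author date stays put.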


It's a different model of development, leading to different expectations.

BSD ties the kernel and the software on top of it together pretty heavily, creating the expectation that the documentation should cover all of it.

Linux meanwhile keeps the kernel and the software on top of it separate, meaning that the documentation usually winds up assembled from separate tools, each with its own standards.


Yes, BSD is a single coherent system, but so are many Linux distros. It's just that we've come to accept bad documentation as the norm for Linux-based tools. In my experience there are several types of problems that are very common for Linux tools:

* Extremely short documentation. Everyone has seen these, a tool where the man page exists but provides almost no actual information.

* Unfriendly reference-type documentation. GNU programs are often guilty of this, coreutils certainly comes to mind. On the upside, it's usually comprehensive. But it's not good - it's a short description followed by a sequential list of every option, so the functionality is described in detail but there are no usage examples, no list of the most common options, or anything like that. Great reference, poor usage documentation.

* Too much info about ancient systems or historical details. Yes, it's great that many of these utilities are portable and can run on different systems or work with files from different systems. The man pages for zip/unzip mention MS-DOS, Minix and Atari systems, while defining the zip format as "commonly found on MS-DOS systems". The man page for less explains that it's a program "similar to more(1)" - completely useless info now - and mentions that it has some support for hardcopy terminals, again information that's not important enough for the first paragraph in 2025.

* Poor keywords in the description. There's the theoretically useful apropos command. My Xorg wouldn't start so I tried to remember how to start my wifi up. apropos 'wlan|wi-fi|wifi|wireless' doesn't mention nmcli, which I was thinking of, though it does at least provide the much more difficult iw command.

* Technical project-specific jargon that makes it easy to find the solution - if you already know it, that is. For example, Xorg documentation generally doesn't use the word "resolution". It's not in the xrandr or Xserver man page, and in the xorg.conf page it's only a reference to virtual screens. Because X uses the term screen size. That's fine, understandable and even accurate but most people would first search for 'resolution'.


I for one really enjoy the historical anecdotes you get in the "NOTES", "PORTABILITY" or even "BUGS" sections. But I do realise that my context is mostly recreational, work doesn't really require glueing POSIX commands together.


Yep. There is no "Linux Operating System." There's the Linux kernel, and that kernel is used in tons of different OSes. It sounds like a small nitpick, but it's a huge deal, and a common misconception for those outside the Linux world, anytime the topic of unifying something in the Linux world comes up.

A shared or central wiki sounds nice, but could quickly end up too messy. Arch having its own makes sense, as in the case of Linux - the distro is the operating system. Arch is a different OS from Fedora, which is a different OS from Ubuntu, etc. Sure, there's a lot of overlap but they are each their own unique OS.


In the case of Debian, they have a pretty different stance when it comes to what the role of a distro is compared to Arch.

Arch is essentially completely freeform; you, the user, are going to be making a lot of technical decisions about what you want your system to look like. It's perfectly okay for Arch to ship 4 different versions of the same type of tool, as long as all 4 are being used. The Arch wiki reflects this; it's focused on giving you a lot of options, while not going too in-depth on what you'd want to do with them. Want to swap out NetworkManager for wpa_supplicant because wpa_supplicant is easier to configure from a terminal? Perfectly fine, go ahead. As a result, most Arch packages don't deviate heavily from upstream unless it's absolutely necessary to get them running.

Debian uh... isn't that. Debian still offers choice, but Debian has set the unenviable goal for themselves of providing a "stable" userland experience. This means Debian offers fewer options, and the options they do offer are pinned to certain versions, sometimes with pretty divergent patches compared to upstream. Their documentation as a result can get much more in-depth, just by virtue of having less to cover than Arch does.

A basic example here is setting up a webserver stack (so a webserver, PHP and MySQL); on Debian, you pick between apache2(+mod_php) or nginx/php-fpm and install MySQL. Debian takes care of wiring up all the permissions, user groups and so on, giving you a "sane" default folder capable of serving PHP scripts on port 80 that anyone can use. It's a lot easier, and nginx's configuration is specifically changed to resemble apache2's vhosts. Arch doesn't do this; Arch gives you the upstream versions of all these packages and then asks you to wire them together yourself.
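As a sketch of what that Debian wiring looks like (paths and the PHP version vary by release, so treat this as illustrative rather than verbatim): the packaged default nginx vhost ships a commented-out PHP block that you mostly just uncomment.

```nginx
# Roughly what /etc/nginx/sites-available/default looks like on Debian
# once the PHP handler is enabled.
server {
    listen 80 default_server;
    root /var/www/html;            # Debian's packaged default web root
    index index.php index.html;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    # Shipped commented out by the Debian package; point it at the
    # php-fpm socket for whatever PHP version your release carries.
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    }
}
```

On Arch, you'd write this (plus the matching php-fpm pool and permissions) yourself, starting from the upstream defaults.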

It means they attract pretty different audiences as a result; Debian users value stability/set and forget (also helped by Debian release cycles basically lasting the same length as most LTS releases of other distros), while Arch users are more conditioned to having to occasionally change their config files on updates.

That's also reflected in what their wikis aim at. Debian wiki pages can generally be version-locked to a release; the Arch wiki needs constant updating as things change.

They're different extremes here; most distros usually sit on one side or the other of this sorta thing (with the only real correlation being that dpkg-based distros usually lean more towards the Debian model), but there's also the pseudo-rolling release distros like Fedora, which try to offer similar stability to Debian but much shorter release cycles, so you'll always be running something at least close to the latest version.


> Their documentation as a result can get much more in-depth, just by virtue of having less to cover than Arch does.

But the entire point is how much better Arch's wiki is than anyone else's. I've never run Arch, I've only ever used Arch's wiki to help with Debian. Doing this ironically helps you keep in mind every weird Debianism to figure out how to apply what you're reading.


Do keep in mind that the EUs approach is very different from the UK one.

The UK law is basically a "go figure it out", which inevitably leads to making shady deals with third parties that are now handling the data of citizens... privacy and data leakage issues abound.

The EU meanwhile is working on a white-label application that can confirm nothing other than "this user is above 18" (which it can do because the EU has national IDs for basically everyone living in it; it also works for other age ranges, since the idea is to also use it to confirm things like buying alcohol) and that is designed to be easy to implement for anyone without having to get approval from the EU first. (The technical specification is available here[0].) It's not perfect (last I saw, they're apparently tying it to Google Play Services for device verification), but it's a far better attempt than what the UK and Australia are doing.

[0]: https://ageverification.dev
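The "confirm nothing other than over-18" idea can be sketched like this - a toy Python illustration of attribute-based attestation, not the actual EU protocol (the real spec uses asymmetric signatures and standardized credential formats; the key, field names and year below are all made up):

```python
# Toy attestation: the issuer sees the birth year, the website only sees a boolean.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"  # hypothetical shared key; real systems use public-key crypto

def issue_attestation(birth_year: int, current_year: int = 2025) -> dict:
    """The (trusted) issuer checks the national ID and discloses only one claim."""
    claim = {"over18": current_year - birth_year >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_attestation(att: dict) -> bool:
    """The relying site checks integrity; it never learns age, name or ID number."""
    payload = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(att["sig"], expected) and att["claim"]["over18"]

print(verify_attestation(issue_attestation(birth_year=1990)))  # → True
print(verify_attestation(issue_attestation(birth_year=2015)))  # → False
```

The point is the shape of the data flow: the site receives a signed boolean, nothing more.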


Obtainium assumes that the app developer is a trustworthy entity, when in reality the mobile ecosystem is as fucked up as it is primarily because of app developers. (Due to bad incentives created by the mobile platform makers, mainly Apple.)

You need a middleman in place in case the app developer goes bad.


Very unexciting stuff; it's just your typical long-running FOSS project issues, as I understand it. The lead maintainer of F-Droid is entrenched in his ways ('cuz it works for me), which leads to him stonewalling any attempts to change or improve the F-Droid workflow[0], but since he holds the keys to the kingdom (and the name recognition prevents forks), they keep him around.

Everyone else then tries to work around him, and through a mixture of emotional appeals, downplaying the importance of certain patches and doing everything in very tiny steps, they try to improve things. It's an extremely mentally draining process that's prone to burning out contributors, which eventually boils over, and then some people quit... which might start a conversation on why nobody wants to contribute to the FOSS project. That conversation inevitably goes nowhere, because the people you'd want to have it with are so fed up with how bad things have gotten that they'd rather just see the person causing trouble removed entirely. (Which may be the correct course of action, but it's an argument often made without putting forward a proper replacement or considering how the project might move forward without them. Some larger organizations can handle the removal of a core maintainer; most can't.) Rinse and repeat that cycle every five years or so.

F-Droid isn't at all unique in this regard, and most people are willing to ignore it "because it's free, you shouldn't have any expectations". Any long-running FOSS project with significant infrastructure behind it will at some point have this issue, and most don't have a great history of handling it, since the bus factor of a lot of major FOSS projects is still pretty much one point five people. (As in, one actual maintainer and one guy who knows what levers to pull to seize control if the maintainer actually gets hit by a bus, with the caveat that they stop being 0.5 of a bus factor and become 0 if they pull them while the maintainer is still around.)

[0]: Basically the inverse of https://xkcd.com/1172/


This is the sort of stuff that makes me want to pursue FIRE. There's so much good that could be done, but isn't because people need to be making money for someone else.

Then again who is to say that I would be a better custodian than this guy?


I like your energy, and I like your awareness that more control or a different center of power may not help. This is where community-oriented leadership techniques could go a long way: building trust, maintaining people's roles and dignity, but increasing that awareness and enabling floodlight focus (big picture) in addition to flashlight focus.

