The Linux port, if there is one, is usually done by a third-party porting studio, and it's not necessarily at the same quality as the original codebase. Also, the devs just don't have the manpower/bandwidth to spare for Linux users given how small this community is.
It's better value for money for both the gamers and the devs if the devs just engage with Valve and get their game running perfectly under Proton.
The answer is pretty simple here - hire CodeWeavers to work on supporting your game in Proton/Wine rather than paying some other porting shop to do an old-style rewrite port.
To be fair, you should compare the Windows version on Windows (no Proton) against the Linux version.
DXVK, which Proton uses, makes some games run better on Windows than "native".
I don't know if you meant it, but you seem to imply that that vocal minority only cried and didn't actually buy the small phones?
You know, a minority is a _minority_: even if everyone in that minority bought an iPhone Mini, the sales numbers still wouldn't be high.
(Edit: just checked; in 2022, 3% of iPhones sold were 13 Minis. Not high, but surely someone out there could run a sustainable business on that 3%.)
The lack of any mention of the Church-Turing thesis in both papers suggests he hasn't even considered that angle.
But it is the fundamental objection he would need to overcome.
There is no reasonable way to write papers claiming to provide proofs in this space without mentioning Church even once, and to me it's a red flag that suggests a lack of understanding of the field.
Does this analogy work? It's exceedingly hard to make new low-background steel, since those radioactive particles are everywhere. But it's not difficult to make AI-free content: just don't use AI to write it.
People do. I do, for instance. My blog is self-hosted, entirely human-written, and done purely for enjoyment. It doesn't cost much to host. Going with a static site generator would actually be free, but I don't mind paying the 55¢/kWh and the $60/month ISP fee to host it myself.
That only raises the question of how to verify which content is AI-free. Was this comment generated by a human? IIRC, one of the big AI startups (OpenAI?) used HN as a proving ground--a sort of Turing Test platform--for years.
Once your video is out in the wild, there's as yet no reliable way to discern whether it was AI-generated. All content posted to public forums will have this problem.
Training future models without experiencing signal collapse will thus require either 1) paying for novel content to be generated (they will never do this as they aren’t even licensing the content they are currently training on), 2) using something like mTurk to identify AI content in data sets prior to training (probably won’t scale), or 3) going after private sources of data via automated infiltration of private forums such as Discord servers, WhatsApp groups, and eventually private conversations.
There is the web of trust. If you really trust a person when they say their stuff isn't AI, then that's probably the most reliable way of knowing. For example, I have a few friends, and I know their stuff isn't AI-edited because they hate it too. Of course, there is no 100% certainty, but it's at least as certain as knowing that they're your friend.
But the question is about whether or not AI can continue to be trained on these datasets. How are scrapers going to quantify trust?
E: Never mind, I didn’t read the OP. I had assumed it was to do with identifying sources of uncontaminated content for the purposes of training models.
Ladybird is building a browser with a couple million dollars, so why can't Mozilla do the same? The total amount of donations Ladybird has received so far probably isn't enough to pay half a year's salary for Mozilla's management. This is absurd.
We don't need strategy, we don't need vision, or any of that kind of bullshit. We just need a team of good developers sitting down and writing a browser.
"The symphony doesn't need a conductor; they just need to play good music." Leonard Bernstein was a fraud who added no value...
"The sailors on board a ship don't need a captain; they just need to sail." Admiral Nelson could have been replaced by democratic voting and focus groups.
Reality: Anarchy only works in very small settings.
> Ladybird is building a browser with a couple million dollars, why can't Mozilla do the same?
1. Ladybird isn't currently competing with Mozilla or anyone, so this is hypothetical.
2. It's a different problem space, because Mozilla is maintaining a decades-old browser.
3. Mozilla is developing a very good browser considering how much money they have. They're competitive with Chrome on features and performance, while Google is dedicating much more money towards Chrome.
Nix solves the shared-library incompatibility problem by being extremely conservative. Every time anything changes, consequential or not - a comment got modified, documentation changed, a test case got added, etc. - it will rebuild all dependents. And not just that, but all dependents of dependents, and dependents of dependents of dependents, on and on. This often results in massive rebuilds.
Sure, you are not going to get shared-library conflicts, but I think this solution is extremely wasteful, and it can make development painful too - look at nixpkgs' staging process.
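A minimal sketch of why that cascade happens (package names here are hypothetical): each derivation's store path is a hash over all of its inputs, so any change to a dependency's source, however trivial, gives it a new path, and everything referencing that path is rebuilt in turn.

```nix
# Rough sketch with hypothetical packages: libfoo's store path is a hash over
# everything used to build it, so a comment-only change to ./libfoo yields a
# new path, and app (whose own hash includes libfoo's path) is rebuilt too,
# and so on for everything further downstream.
let
  pkgs = import <nixpkgs> { };

  libfoo = pkgs.stdenv.mkDerivation {
    name = "libfoo";
    src = ./libfoo;           # touch a comment here -> new source hash -> new store path
  };

  app = pkgs.stdenv.mkDerivation {
    name = "app";
    src = ./app;
    buildInputs = [ libfoo ]; # references libfoo's exact path -> rebuilt whenever it changes
  };
in
app
```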
The reason someone changes a dependency at all is that they expect a difference in behavior. No one would bother updating a dependency if they weren't getting something out of it; that would be a waste of effort and an unnecessary risk.
Each person doesn't have to perform the build on their own. A build server will evaluate it and others will pull it from the cache.
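A hedged sketch of that setup on NixOS (the cache URL and key below are placeholders, not real endpoints):

```nix
# Hypothetical NixOS settings: point machines at a shared binary cache so they
# substitute prebuilt store paths instead of building everything locally.
# The URL and key are placeholders, not real endpoints.
{
  nix.settings = {
    substituters = [ "https://cache.example.org" ];
    trusted-public-keys = [ "cache.example.org-1:REPLACE_WITH_CACHE_PUBLIC_KEY" ];
  };
}
```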
The greater waste that Nix eliminates is the human time spent troubleshooting something that broke in production because of what should have been an innocuous change, and the business value lost while production is degraded. When you trust that your dependencies are exactly what you asked for, it frees the mind of doubt and lets you troubleshoot the actual problem more efficiently.
Aside, I spent over a decade on Debian derived distros. I never once had one of these distros complete an upgrade successfully between major versions, despite about 10 attempts spread over those years, though thankfully always on the first sacrificial server attempted. They always failed with interesting issues, sometimes before they really got started, sometimes borking the system and needing a fresh install. With NixOS, the upgrades are so reliable they can be done casually during the workday in production without bothering to check that they were successful. I think that wouldn't be possible if we wanted the false efficiency of substituting similar but different packages to save the build server from building the exact specification. Anything short of this doesn't get us away from the "works on my machine" problem.
Yeah, they may be reliable _for you_. And do note this reliability doesn't come automatically with Nix's model; it is only possible because many people put a lot of effort into making it work correctly.
If you used the unstable channels, you would know. My NixOS upgrades break _all_ the time. On average, probably once a month.
Yep, anyone not getting how absolutely huge the Nix model is should just install the whole KDE desktop and the Gnome desktop, then uninstall both. Only Nix makes that basically a no-op.
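As a rough sketch (NixOS option names shift a bit between releases), both desktops are just lines in the system configuration; drop the lines and rebuild, and the old store paths simply stop being referenced until garbage collection:

```nix
# Enable both desktops declaratively. Removing these lines and running
# nixos-rebuild switch (or rolling back to the previous generation)
# "uninstalls" them: the old store paths just stop being referenced
# until the garbage collector reclaims them.
{
  services.xserver.enable = true;
  services.xserver.desktopManager.gnome.enable = true;
  services.xserver.desktopManager.plasma5.enable = true;
}
```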
Nix has support for bit-for-bit reproduction and will not rebuild on comment-only changes if you configure it to.
Of course, lots of software isn't ready for bit-for-bit reproduction, which is why Nix has taken such a pragmatic approach. (I have written a lot about this.)
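Concretely, the opt-in path is the (still experimental) content-addressed derivation support; a rough sketch, assuming experimental-features = ca-derivations is enabled in nix.conf:

```nix
# Sketch of an opt-in content-addressed derivation. Needs
# experimental-features = ca-derivations in nix.conf. If a comment-only
# change still produces bit-identical output, dependents can be substituted
# from the cache instead of being rebuilt.
let
  pkgs = import <nixpkgs> { };
in
pkgs.stdenv.mkDerivation {
  name = "libfoo";
  src = ./libfoo;
  __contentAddressed = true;
  outputHashMode = "recursive";
  outputHashAlgo = "sha256";
}
```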
It's all a series of tradeoffs. If your goal is reproducibility (as close as you can get), you will likely have a larger build graph, since you are accounting for more!
Sometimes we like to believe we can have our cake and eat it too rather than understand life's a series of tradeoffs.
When we think we are getting a silver bullet, we've likely just pushed that complexity somewhere else.
IIUC you are talking about CA derivations? Yeah, they may help, but it's hard to know how much since they're not in production yet, despite being part of Eelco's original paper describing Nix. So my hopes aren't high.
> When we think we are getting a silver bullet, we've likely just pushed that complexity somewhere else.
True, but we kind of just stopped looking, and I feel much of the solution space hasn't been explored.
I mean, there is no other way that guarantees correctness across arbitrary tools/functions/builds. Like, what if I have a step that replaces certain comments with code?
Also, the primary way to develop with Nix is to create your exact, reproducible environment in the form of a shell, and then develop there using the usual, language-idiomatic iterative way.
But now you can actually have a very specific compiler flag for a single dependency, mixed with a completely different libc, working 100% in a given shell for you and everyone else, instead of spending a couple of days iterating through nodejs and npm version combinations just to start working on a new project.
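A rough shell.nix in that spirit (the packages, the -O3 flag, and the nodejs_20 attribute are just illustrative; exact attribute names vary by nixpkgs revision):

```nix
# Hypothetical dev shell: one dependency rebuilt with an extra compiler flag,
# another taken from the musl package set, and an exact nodejs pinned for
# everyone who enters the shell.
let
  pkgs = import <nixpkgs> { };

  # Rebuild only zlib with an extra flag; nothing else is affected.
  zlibTuned = pkgs.zlib.overrideAttrs (old: {
    NIX_CFLAGS_COMPILE = (old.NIX_CFLAGS_COMPILE or "") + " -O3";
  });
in
pkgs.mkShell {
  packages = [
    pkgs.nodejs_20        # pins node (and its bundled npm) for the project
    zlibTuned
    pkgs.pkgsMusl.hello   # example of a package built against musl instead of glibc
  ];
}
```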
> show humans are capable of what you just defined as generalizable reasoning.
I would also add "and plot those capabilities on a curve". My intuition is that the SotA models are already past the median human abilities in a lot of areas.
In the context of this paper, I think "generalizable reasoning" means finding a method to solve the puzzle and then being able to execute that method on puzzle instances of arbitrary complexity.