I think that's the point of misunderstanding. At the risk of sounding like an LLM, this isn't about a "mode," it's about the infuriating choice made by web designers who hardwire their pages to dark themes.
So, no, it is not "easily configured in Firefox" or anything else running on the client side. When I visit various sites and have to squint at the text, that's 100% on the site designers. It may be fixable by various third-party extension hacks and kludges with numerous drawbacks of their own, but reskinning the site itself isn't something the browser can (or should) be expected to do.
Ideally, sites where the admins prefer light-on-dark text should follow Wikipedia's example, which really sets the standard IMHO, and give users a choice -- auto, dark, or light mode. Here again, 'mode' refers to an option provided by the site, with nothing whatsoever to do with client-side chrome. They are basically just giving you the option of using different curated style sheets, which is great.
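To make that concrete, here is a minimal sketch (my own, not Wikipedia's actual code -- the attribute and storage-key names are made up) of how such a switcher typically works: "auto" simply defers to the browser/OS signal exposed via prefers-color-scheme, while "light" and "dark" pin one of the site's curated stylesheets.

    // Hypothetical three-way theme switcher: "auto" follows the OS/browser
    // preference, the explicit choices override it.
    type Theme = "auto" | "light" | "dark";
    const darkQuery = window.matchMedia("(prefers-color-scheme: dark)");

    function applyTheme(choice: Theme): void {
      const effective = choice === "auto" ? (darkQuery.matches ? "dark" : "light") : choice;
      // The site maps each value to one of its curated stylesheets,
      // e.g. via a [data-theme="dark"] selector.
      document.documentElement.dataset.theme = effective;
      localStorage.setItem("theme-choice", choice); // remember the user's pick
    }

    // Re-resolve "auto" if the OS preference flips while the page is open.
    darkQuery.addEventListener("change", () => {
      applyTheme((localStorage.getItem("theme-choice") as Theme) ?? "auto");
    });

    applyTheme((localStorage.getItem("theme-choice") as Theme) ?? "auto");

The zero-JS version of the same idea is a stylesheet that keys off the prefers-color-scheme media query directly; either way, the point is that the choice lives in the page, not in the browser chrome.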
It's hard to sympathize with the "dark mode hater" when only a small minority of websites enforce dark mode without respecting user choice, while most websites enforce light mode without respecting user choice (including HN).
Is it though? Most apps I use follow the OS/browser preference or default to light mode. One of the more notable exceptions is Discord, but that is largely explained by the fact that its gaming-focused audience often demands dark mode.
Also, the dark mode setting in most browsers is one search entry away. I really don't see the problem here.
Well, I guess that's fine for the "gaming-focused audience," then.
For the benefit of the rest of us, explain how to turn off "dark mode" when viewing specific websites that are hardwired to use it, while running Safari on iOS, or in Firefox on desktop, without installing various extensions that may not be available to users at work, or changing the way the whole OS appears.
Edit due to rate-limiting: The Firefox theme has absolutely nothing to do with how a web page is rendered. Select the light theme and go to Hackaday, or a logged-out Mastodon page, and you will find that it looks exactly the same.
Same with the peanut gallery that always pops up with helpful advice like "Just change your OS theme." Even if that would help, which it wouldn't, I'm not going to change the global OS appearance to accommodate a few asshat web designers.
> while running Safari on iOS, or in Firefox on desktop, without installing various extensions that may not be available to users at work
Basically all browsers default to following the operating system's appearance settings. I don't know why you're specifically asking not to change the OS settings; why would you keep your OS in dark mode when you want websites to be light?
To answer your question, I have no clue about Safari, but in Firefox you go to the settings page, and right on the start page there is Language and Appearance, where you can select your theme.
> when viewing specific websites that are hardwired to use it,
Yes, that is an issue. But it is not an issue with dark mode per se; it is an issue with software quality and design decisions. Some apps might implement their own theme switcher (which they should not do, but people seem to like making their own worse implementations of browser standards), others might not implement a dark or light mode to begin with.
It mostly just breaks things unfortunately. You can faff around for ages trying to figure out which devices work and which don’t but you end up with not much to show for it.
Every CEO is reading from the same script right now. It might be a bubble, but just like the internet it's still going to be relevant; it's just the crap companies and grifters that will die.
How about those AMD crashes, though. All my friends on AMD CPUs have had a hell of a time over the last two years with constant crashes in Unreal Engine games. Meanwhile, I made fun of myself for buying an ancient 11 series, which is a decade-old arch at this point, but it's rock solid.
I had those due to insufficient cooling in the case (7900 XT). Tell them to try running the games without the side panel. I installed additional fans later and have had no such issue ever since.
Contrast that with Intel's last generation of chips, all of which started failing after a similar time period. AMD only need to be better than the competition.
Their linux driver support isn't so great, though. I really considered an AMD GPU for my most recent build, and based on the driver support for just the integrated graphics on my new AMD CPU (7900X), I opted for an NVidia card instead.
How so? Switching from an Nvidia card to an AMD one I am now able to upgrade my kernel whenever without getting a blinking cursor after reboot. How are in-tree drivers worse than whatever Nvidia does?
I'm running a 6900 XT on Arch and have had no problems so far. Steam, the Heroic launcher, and every game I've tried so far worked like a charm. You can even OC with LACT [1] if you want to.
Don't undersell it. The game is playable in a browser. The graphics are just blocks, the aliens don't return fire, and there are no bunkers. The aliens change colors when they descend to a new level (whoops). But for less than 60 seconds of effort it does include the aliens (who do properly go all the way to the edges, so the strategy of shooting the sides off of the formation still works -- not every implementation gets that part right), and it does detect when you have won the game. The tank and the bullets work, and it even maintains the limit on the number of bullets you can have in the air at once. However, the bullets are not destroyed by the aliens, so a single shot can wipe out half of a column. It also doesn't have the formation speed up as you destroy the aliens.
So it is severely underbaked, but the base gameplay is there -- roughly what you would expect out of an LLM given only the high-level objective. I would expect an hour or so of vibe coding would probably result in something reasonably complete before you started bumping up against the context window. I'm honestly kind of impressed that it worked at all given the minuscule amount of human input that went into that prompt.
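For the curious, here's a rough sketch (mine, written from the description above, not the generated game's code) of the two mechanics it missed: the shot being consumed by the first alien it hits, and the formation stepping faster as aliens are destroyed.

    interface Alien { x: number; y: number; alive: boolean; }
    interface Shot  { x: number; y: number; active: boolean; }

    const STEP_MS_FULL = 800; // formation step interval with every alien alive
    const STEP_MS_MIN = 60;   // fastest interval when only a few remain

    // The formation speeds up as it thins out.
    function stepInterval(aliens: Alien[]): number {
      const remaining = aliens.filter(a => a.alive).length;
      const fraction = remaining / aliens.length; // 1.0 down toward 0
      return STEP_MS_MIN + (STEP_MS_FULL - STEP_MS_MIN) * fraction;
    }

    // A shot dies with the first alien it hits, so one bullet can't sweep a column.
    function resolveHit(shot: Shot, aliens: Alien[], hitbox = 8): void {
      if (!shot.active) return;
      for (const alien of aliens) {
        if (alien.alive &&
            Math.abs(alien.x - shot.x) < hitbox &&
            Math.abs(alien.y - shot.y) < hitbox) {
          alien.alive = false;
          shot.active = false;
          break;
        }
      }
    }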
I do think that people typically undersell the ability of LLMs as coding assistants!
I'm not quite sure how impressed to be by the LLM's output here. Surely there are quite a few simple Space Invaders implementations that made it into the training corpus. So the amount of work the LLM did here may have been relatively small; more of a simple regurgitation?
>The aliens change colors when they descend to a new level (whoops).
That is how Space Invaders originally worked: it used strips of colored cellophane to give the B&W graphics color, and the aliens moved behind a different colored strip on each level down. So, maybe not a whoops?
Edit: After some reading, I guess it was the second release of Space Invaders that had the aliens change color as they dropped; the first version only used the cellophane for a couple of parts of the screen.
You can always upload your own; it's pretty simple to do in a reproducible manner using something like Packer, but even without it you can just boot a VM into a rescue system, write your OS of choice to the VM disk, and reboot.
x86 is not nearly as bad as 68k, more by lucky accident than design. As it was a stop-gap project designed to hastily extend an earlier design to 16-bit operation and 20-bit addressing while the proper CPU of the future (iAPX432) needed a few extra months to bake, the designers didn't have enough time to properly fubar the core of the instruction set. This made x86 assembly much less nice to write than the nearly fully orthogonal 68k, but it also made the subset of it that was in actual use much easier to make faster later.
68k designers were not being dumb when they designed it. At that time pretty much the entire industry was deep in the weeds of "closing the semantic gap", or making CPUs directly run the operations that would be encoded in high-level languages. All CPUs designed to this paradigm were doomed, and how doomed they ended up being depended mainly on how well they managed to implement it.
IBM's 801 and Patterson's RISC would blow it all up in the early '80s.
A shame, really. 68k was (and is) much more approachable for those learning assembly. No need to deal with 64k segmented memory, for instance.
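(For anyone spared the experience: a real-mode 8086 address is segment * 16 + offset on a 20-bit bus, so the same physical byte has many different names. A quick illustration, assuming the usual real-mode wraparound:)

    // Real-mode 8086 address translation: physical = segment * 16 + offset,
    // wrapped to the 20-bit address bus.
    function physicalAddress(segment: number, offset: number): number {
      return ((segment << 4) + offset) & 0xfffff;
    }

    console.log(physicalAddress(0x1234, 0x0010).toString(16)); // "12350"
    console.log(physicalAddress(0x1235, 0x0000).toString(16)); // "12350" again: two names, one byte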
As an aside… National Semiconductor also had an ill-fated architecture in the NS32000, which I also wish had taken off. On paper, it did a lot right (VAX-like design, flat memory model, 32-bit almost immediately out of the gate), yet NS was woefully incapable of producing masks without bugs. It took them many tries to get it right, and before then they were already being beaten to market by their competition.
Then, to add insult to injury, NS' own compiler for NS32000 chips produced rather unoptimized code. It took GNU porting GCC to the platform in 1987 for the chips to fully realize their potential, years after they had missed their chance.
If NS had had their act together… dare I say an IBM PC built around their CPU would have been possible, and more interesting than the 8088 IBM ultimately went with.
AFAIK, NS used the Green Hills compiler; at least, my ns32532 dev system comes with it. It's not great, but not terrible. I personally don't remember the compiler being in the top 5 show-stopper issues with the 32k (the first 3 were 'cpu bugs', 'mmu bugs' and 'fpu bugs'). And it was slow, particularly if you used the MMU.
The 32000 line (like the 68000) found a very long life as an embedded processor, particularly in the printer/fax space (ns32cg16 and follow-ons, ns32gx32).
The 32332 was a nice processor. The 32532 was very, very nice. Both way too late.
Given what IBM was trying to deliver with the PC, I doubt they'd have looked at the 32000. Single source, few i/o support chips, relatively expensive, etc., etc. Way more likely that a non-Intel IBM PC would have had a Z8000 inside (and not a 68k, for mostly IBM political reasons).
That said, I'd possibly contest you on the single-source issue you brought up. IBM likely would have told NS… much like they told Intel back in the day… that if they wanted to do business with them, they needed to ensure second sourcing was possible.
Judging by how desperate NS was to make deals, I'm quite sure that hurdle would have been overcome quite easily, with AMD or even MOS Technology stepping up to fill the void.
If we want to pick nits, NS had a second source: TI. But that was, afaik, just paperwork at that point (and I honestly don't know if TI ever produced a 32k processor). It takes time to bring a second source on-line. And given the trouble NS had building the chips themselves, if I was IBM, I'd have Questions.
That said...even if NS could wave a magic wand and produce a second source, there were plenty of other reasons to discount the 32k, and I've never seen the slightest evidence that IBM ever considered it.
Dream away. How much weirder a dream would it be if IBM had gone Zilog? Fanbois endlessly arguing the superiority of segmented over paged memory? Terrible marketing from an oil company? Wordstar still the standard? I sorta like that multiverse option.
I’ve got a 68060 RC (MMU, no FPU) in my Amiga 1200 and it seems to work alright. The full 68060 is insanely expensive these days though, you could get a decent Ryzen for a lot less money.
That would be a 68LC060. And the 68EC060 was no MMU, no FPU. The RC just means it's a ceramic PGA package, so an MC68LC060RC50 would be a 68060 with MMU, without FPU, in a ceramic PGA206 package binned to run at 50MHz. If you have a 68060RCxx chip that doesn't have a (functioning) FPU, it's probably a relabeled 'fake' LC, which is actually pretty common.
And, yeah, they're unfortunately crazy expensive, esp if you get stuck with one of those fakes.
In your specific case too, I believe there are options for soft emulation of the FPU if you needed support for one in a pinch. I can’t say how the performance is, but I’d imagine it would be insanely slow.