
Color management infrastructure is intricate. To grossly simplify: somehow you need to connect together the profile and LUT for each display, upload the LUTs to the display controller, and provide appropriate profile data for each window to their respective processes. During compositing, convert any buffers that don't already match the output (unmanaged applications will probably be treated as sRGB; color-managed graphics apps will opt out of conversion and do whatever is correct for their purpose).
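
To make that concrete, here's a rough sketch of the per-window decision during compositing. Everything in it (Window, convert, the profile strings) is made up for illustration, not any real compositor or Wayland API:

  # Sketch only: stand-in types and functions, not a real API.
  from dataclasses import dataclass

  SRGB = "sRGB"

  @dataclass
  class Window:
      buffer: object
      profile: str | None   # None for apps that never declared a profile
      managed: bool         # True if the app opted into color management

  def convert(buf, src, dst):
      # Placeholder for the real conversion (3D LUT, shader, etc.)
      return buf if src == dst else ("converted", buf, src, dst)

  def composite(windows, output_profile):
      # Color-aware apps pass through untouched; everything else is
      # assumed to be sRGB and converted to the output's profile.
      layers = []
      for w in windows:
          if w.managed:
              layers.append(w.buffer)
          else:
              layers.append(convert(w.buffer, w.profile or SRGB, output_profile))
      return layers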

Yes, but why is the compositor dealing with this? Shouldn't the compositor simply be deciding which windows go where (X, Y, and Z positions) and leave the rendering to another API? Why does every different take on a window manager need to re-do all this work?

Turning the question around, what other part of the system _could_ do this job? And how would the compositor do any part of its job if it doesn't have access to both window contents and displays? I'm not super deep in this area but a straightforward example of a non-managed app and a color-aware graphics app running on a laptop with an external display seems like enough to figure out how things need to fit together. This neglects some complicating factors like display pixel density, security, accessibility, multi-GPU, etc, but I think it more or less explains how the Wayland authors arrived at its design and how some of the problems got there.

I'm questioning the idea that people should be writing compositors at all. Why doesn't Wayland itself do the compositing and let everyone else just manage windows?

It's like going to Taco Bell and they make you grind your own corn for your tortillas.


Why? Probably better to ask the Wayland developers that. Maybe you're right. That said, whether everyone uses the same compositor and window management is modular, or each window manager composites and the shared code travels as libraries, I don't think the complexity of color management is much different.

I mean, when I hear the word "compositing" I definitely imagine something that involves "alpha" blending, and doing that nicely (instead of a literal alpha calculation) is going to involve colour management.

That's on the Wayland team though. They drew up the new API boundaries and decided that all window managers would now be in the business of compositing.

If I wanted to put it most uncharitably, I'd say they decided to push all of the hard parts out of Wayland itself and force everyone else to deal with them.


You can have the tool start by writing an implementation plan describing the overall approach and key details including references, snippets of code, task list, etc. A plan is much faster to review and refine than a raw diff, so it's easier to make sure it matches your intent. Once that's acceptable, the changes are quick, and having the machine do a few rounds of refinement to make sure the diff vs HEAD matches the plan helps iron out some of the easy issues before human eyes show up. The final review is then easier because you are only checking for smaller issues and consistency with the plan that you already signed off on.

It's not magic though, this still takes some time to do.


It depends how old / where; the US didn't start vaccinating widely until around 30 years ago.

We didn't have this new-fangled chickenpox vaccine during my Gen-X childhood.

Or a lot of millennials. My parents were annoyed we had to go through it while Japan had been vaccinating since the 80s.

I hate to break it to you but the original work on that topic was by Schmidhuber & Schmidhuber back in 1963.

I've been using Macs in various forms since the 80s and I've carried a Mac laptop with me nearly full time since the early 2000s. While I don't think quality is necessarily worse overall than a decade or two ago, I have run out of patience for rewrites of most of the apps I care about that ship with major regressions. For the first time in a lot of years I have a Linux laptop alongside my Mac and, if all works out, I'm planning to shift all my important workflows over.

I think packaging is still mostly Penang (Malaysia).

Intel opened a packaging facility in New Mexico in January that’s supposed to be their largest.

https://newsroom.intel.com/intel-foundry/updates-intel-10-la...


Most likely: Same design, marketed as the same chip. Might see differences in power consumption and thermals but not in a way that will affect normal users. Apple has relatively few designs that they use across different products so that would further mask differences. If the Intel-fabbed variant of a chip ran 5% hotter but Apple put it into Apple TV then basically nobody would notice or care.

In my one encounter with one of these systems it induced new code and tooling complexity, orders of magnitude performance overhead for most operations, and made dev and debug workflows much slower. All for... an occasional convenience far outweighed by the overall drag of using it. There are probably other environments where something like this makes sense but I can't figure out what they are.


> All for... an occasional convenience far outweighed by the overall drag of using it

If you have any long-running operation that could be interrupted mid-run by any network fluke (or the termination of the VM running your program, or your program being OOMed, or some issue with some third party service that your app talks to, etc), and you don’t want to restart the whole thing from scratch, you could benefit from these systems. The alternative is having engineers manually try to repair the state and restart execution in just the right place and that scales very badly.

I have an application that needs to stand up a bunch of cloud infrastructure (a “workspace” in which users can do research) on the press of a button, and I want to make sure that the right infrastructure exists even if some deployment attempt is interrupted or if the upstream definition of a workspace changes. Every month there are dozens of network flukes or 5XX errors from remote endpoints that would otherwise leave these workspaces in a broken state and in need of manual repair. Instead, the system heals itself whenever the fault clears and I basically never have to look at the system (I periodically check the error logs, however, to confirm that the system is actually recovering from faults—I worry that the system has caught fire and there’s actually some bug in the alerting system that is keeping things quiet).


The system I used didn't have any notion of repair, just retry-forever. What did you use for that? I've written service tree management tools that do that sort of thing on a single host but not any kind of distributed system.


Repair is just continuously retrying some reconciliation operation, where “reconciliation” means taking the desired state and the current state and diffing the two to figure out what actions need to be performed. In my case I needed to look up what the definition of a “workspace” was (from a database or similar) in terms of what infrastructure should exist, then query the cloud provider APIs to figure out what infrastructure did exist, and then create any missing infrastructure, delete any infrastructure that ought not exist, and update any infrastructure whose state is not how it ought to be.
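
Roughly this shape, in made-up Python (the create/update/delete calls stand in for real cloud APIs):

  import time

  def reconcile(desired: dict, actual: dict, api):
      for name, spec in desired.items():
          if name not in actual:
              api.create(name, spec)      # missing -> create it
          elif actual[name] != spec:
              api.update(name, spec)      # drifted -> bring it back in line
      for name in actual:
          if name not in desired:
              api.delete(name)            # shouldn't exist -> remove it

  def run_forever(load_desired, load_actual, api, interval=60):
      while True:
          try:
              reconcile(load_desired(), load_actual(), api)
          except Exception:
              pass  # a network flake or 5XX just means we retry next pass
          time.sleep(interval)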

> I've written service tree management tools that do that sort of thing on a single host but not any kind of distributed system.

That’s essentially what Kubernetes is—a distributed process manager (assuming process management is what you are describing by “service tree”).


I'm not sure which one you used, but ideally it's so lightweight that the benefits outweigh the slight cost of developing with them. Besides the recovery benefit, there is observability and debugging benefits too.


I don't want to start a debate about a specific vendor but the cost was very high. Leaky serialization of call arguments and results, then hairpinning messages across the internet and back to get to workers. 200ms overhead for a no-op call. There was some observability benefit but it didn't allow for debugger access and had its own special way of packaging code so net add of complexity there too. That's not getting into the induced complexity caused by adding a bunch of RPC boundaries to fit their execution model. All that and using the thing effectively still requires understanding their runtime model. I understand the motivation, but not the technical approach.


Regardless of the vendor, it sounds like you were using the old style model where there is a central coordinator and a shim library that talks to a black box binary.

The style presented in this blog post doesn't suffer from those downsides. It's all done with local databases and pure language libraries, and is completely transparent to the user.


Yeah, the system in the blog post retargeted at Postgres would be a step up from what I've used. I'm still skeptical of the underlying model of message replay for rehydration because it makes reasoning about changes to the logic ("flows" in the post's terminology) really hard. You have to understand what the runtime is doing as well as how all the previous versions of the code worked, the implications for all the possible states of the cached step results, and how those logs will behave when replayed through the current flow code. I think in all worlds where transactions are necessary a central coordinator is necessary, whether it's an RDBMS under a traditional app or something fancier under one of these durable execution things.
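
Here's a toy version of the replay model I mean, just to show the hazard (not any vendor's actual runtime):

  # Step results get recorded in a log; on rehydration the flow is re-run
  # and each step() call is fed the next cached result instead of executing.
  def run_flow(flow, log):
      pos = 0

      def step(fn, *args):
          nonlocal pos
          if pos < len(log):
              result = log[pos]        # replay: reuse the cached result
          else:
              result = fn(*args)       # live: execute and record
              log.append(result)
          pos += 1
          return result

      return flow(step)

  def flow_v1(step):
      user = step(lambda: {"id": 1})
      return step(lambda u: "charged %s" % u["id"], user)

  # If a later version inserts or reorders a step, a log written by v1 is
  # replayed positionally against the new code, so cached results get matched
  # to the wrong steps. Reasoning about that means knowing every prior version.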

In the end I'm left wondering what the net benefit is over say an actor framework that more directly maps to the notion of long-lived state with occasional activity and is easier to test.

All that said some of the vendors have raised hundreds of millions of dollars so someone must believe in the idea.


Temporal


I don't know if the bet was even particularly wrong. If they had done a little better job on performance, capitalized on the pains of Netburst + AMD64 transition, and survived long enough to do integrated 3D graphics and native libraries for Javascript + media decoding it might have worked out fine. That alternate universe might have involved a merger with Imagination when the Kyro was doing poorly and the company had financial pain. We'll never know.


I don't either. Even with their problems, they didn't miss by much.

One key factor against them, though, is that they were facing a company whose long-term CEO had written Only The Paranoid Survive. At that point he had moved from being the CEO to the chairman of the board. But Intel had paranoia about possible existential threats baked into its DNA.

There is no question that Intel recognized Transmeta as a potential existential threat, and aggressively went after the very low-power market that Transmeta was targeting. Intel quickly created SpeedStep, allowing power consumption to dynamically scale when not under peak demand. This improved battery life on laptops using the Pentium III, without sacrificing peak performance. They went on to produce low power chips like the Pentium M that did even better on power.

Granted, Intel never managed to match the low power that Transmeta had. But they managed to limit Transmeta enough to cut off their air supply - they couldn't generate the revenue needed to invest enough to iterate as quickly as they needed to. This isn't just a story of Transmeta stumbling. This is also a story of Intel recognizing and heading off a potential threat.


I always found it ironic that Intel benignly neglected the mobile CPU/SoC market and also lost their process lead despite this supposed culture of never underestimating the competition. The paranoid Intel of the 80s/90s is clearly not the one that existed going into the 2000s and 2010s.


Intel missed mobile, graphics, and AI, while failing to deliver 10nm, and it was all self-inflicted. They didn't understand what was coming. Transmeta was an easily identified threat to Intel's core CPU products, so Intel was more likely to pull out all the stops, with above-board competition on product as well as IP infringement and tortious interference. Intel had good risk management in having a team working on evolutions of P6; if that hadn't already been a going concern (see also Timna), coming up with a competitive product in the early 2000s would have been much harder.


For securing and maintaining a complex legacy application it seems like a reasonable approach would be to move the majority into Fil-C, then hook up the bits that don't fit via RPC. Maybe some bits get formally verified, rewritten in Rust, ported to new platform APIs, whatever, but at least you get some safety for the whole app without a rewrite.

