It's you again! Hi. We talked about capabilities a while back.
> I am not sure yet how to handle this in a suitable way within a process
If this is about implementing capabilities, I think partitioned capabilities should be the default.
> Some of this is just the problem with POSIX in general. I did consider such problems (of file descriptors and of event handling)
Yes, I think the kernel is trying to do too much. The more micro/exokernel it is, the better, IMO. Doesn't reduce (essential) complexity, but gives programmers the flexibility to tackle it how they want.
I'm also curious how you're thinking of doing event handling in general, like D-Bus or something. I think IPC is best left as a point-to-point bare bones communication channel, but even then it's pretty complex as the central load-bearing construct. For events, I expect there would be a lot of shared memory usage. It would use centralized services and/or userspace-defined capabilities to restrict who can receive certain events. I'm not too concerned since it's more of a userspace concern, unlike IPC.
> If this is about implementing capabilities, I think partitioned capabilities should be the default.
I am not entirely sure, but probably.
> I'm also curious how you're thinking of doing event handling in general, like D-Bus or something. I think IPC is best left as a point-to-point bare bones communication channel, but even then it's pretty complex as the central load-bearing construct.
I dislike D-Bus. My idea does not use any kind of shared message bus.
IPC would be done with messages: any process that has a reference to a capability can send messages to it and can request to receive messages from it, so received messages can serve as events. A message can contain arbitrary bytes and also capabilities. System calls would be used to send and to request such messages, with parameters for blocking/non-blocking operation, for waiting on multiple objects at once, and for atomic wait-and-send or wait-and-receive (to avoid some kinds of race conditions).
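Roughly, such system calls might look like the following C sketch; nothing is final, and every name, type, and flag here is only invented for illustration:

    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t cap_t;           /* reference to a capability */

    struct msg {
        void   *bytes;                /* arbitrary payload */
        size_t  len;
        cap_t  *caps;                 /* capabilities passed along with it */
        size_t  ncaps;
    };

    #define MSG_NONBLOCK 0x1          /* return instead of waiting */

    /* Send a message to one capability. */
    int cap_send(cap_t target, const struct msg *m, int flags);

    /* Receive the next message from any of several capabilities at once. */
    int cap_recv(const cap_t *sources, size_t nsources,
                 struct msg *out, int flags);

    /* Atomically wait and then send/receive, so no other message can
       slip in between the wait and the operation. */
    int cap_wait_send(cap_t wait_on, cap_t target,
                      const struct msg *m, int flags);
    int cap_wait_recv(cap_t wait_on, const cap_t *sources, size_t nsources,
                      struct msg *out, int flags);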
> For events, I expect there would be a lot of shared memory usage.
I had also considered shared memory, but my intention is to allow proxy capabilities and network transparency (network transparency itself would be implemented with proxy capabilities), so my initial thought was to avoid shared memory.
However, shared memory may still be useful, and there may be ways to make it work transparently without otherwise affecting the protocol, e.g. read-only or copy-on-write mappings, or mappings that only become accessible by receiving events. A pass-through function would also be possible, to make some proxies more efficient. These features are essentially optional optimizations: if someone implements a proxy that does not use them, programs still work, even if they are unaware they are going through a proxy that cannot share memory.
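For example (only an illustration of the idea, not how it would actually be implemented), a proxy that cannot share memory could downgrade a shared payload into an ordinary inline copy before forwarding it, and receivers would not have to care which form they get:

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    enum payload_kind { PAYLOAD_INLINE, PAYLOAD_SHARED_RO };

    struct payload {
        enum payload_kind kind;
        const void *data;     /* inline bytes, or pointer into a mapping */
        size_t      len;
    };

    /* Downgrade a shared-memory payload to an inline copy; a no-op for
       payloads that are already inline. */
    static int payload_make_inline(struct payload *p)
    {
        if (p->kind == PAYLOAD_INLINE)
            return 0;
        void *copy = malloc(p->len);
        if (copy == NULL)
            return -1;
        memcpy(copy, p->data, p->len);
        p->kind = PAYLOAD_INLINE;
        p->data = copy;
        return 0;
    }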
There are also other considerations, such as audio/video/input synchronization; if you display a movie (or a game, which also involves input), the audio and video should stay synchronized even if one or both are being redirected (e.g. you might redirect the audio output to an EQ filter or to a remote computer, or redirect both together to a remote computer or a recorder, or to a program that expects input from a camera).
> It would use centralized services and/or userspace-defined capabilities to restrict who can receive certain events. I'm not too concerned since it's more of a userspace concern, unlike IPC.
Restricting who can receive certain events would indeed be a feature of the userspace-defined capabilities. Some services can be centralized ones that many programs use, either directly or through proxies; those proxies would handle security and many other features.
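As a simple illustration (again only a sketch, reusing the hypothetical cap_send/cap_recv calls from above), such a filtering proxy is just an ordinary program that forwards only the events its policy allows:

    /* Receive events on the upstream capability and forward only those
       the (caller-supplied) policy allows to the restricted client. */
    static void filter_proxy(cap_t upstream, cap_t client,
                             int (*allowed)(const struct msg *))
    {
        struct msg m;
        for (;;) {
            if (cap_recv(&upstream, 1, &m, 0) < 0)
                break;                   /* upstream went away */
            if (allowed(&m))
                cap_send(client, &m, 0); /* forward permitted events only */
            /* disallowed events are simply dropped */
        }
    }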
Some of my ideas are similar to (though different in many ways from) other designs, including a few things listed in http://www.divergent-desktop.org/blog/2020/08/10/principles-... Proxy capabilities, the Command, Automation, and Query Language, the common data format used for most data, and other features I intend the system to have would help with some of the things listed there, among other benefits. (My ideas are mostly independent of that and other documents, but some of them end up being similar, and sometimes my ideas can then be refined when I learn more from such documents, too.)