Hacker News

Conceptually, what are they actually doing, and why is it faster than other RPC techniques?



Think of it in terms of REST. A door is an endpoint/path provided by a service. The client can make a request to it (call it). The server can/will respond.

The "endpoint" is set up via door_create(); the client connects by opening it (or by receiving the open fd some other way) and makes the request via door_call(). The service sends its response via door_return().
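A minimal sketch of that flow (Solaris-only, link with -ldoor; the path "/tmp/my_door" and the ping/pong payload are made up for illustration):

```c
#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Server procedure: runs in the context "handed over" by the caller. */
static void serv_proc(void *cookie, char *argp, size_t arg_size,
                      door_desc_t *dp, uint_t n_desc)
{
    char reply[] = "pong";
    door_return(reply, sizeof reply, NULL, 0); /* respond; does not return */
}

/* Service side: create the door and attach it to a filesystem name. */
static int serve(void)
{
    int dfd = door_create(serv_proc, NULL, 0);
    close(open("/tmp/my_door", O_CREAT | O_RDWR, 0644)); /* ensure file exists */
    return fattach(dfd, "/tmp/my_door");                 /* the "endpoint" */
}

/* Client side: open the door and call through it. */
static int call_door(void)
{
    int dfd = open("/tmp/my_door", O_RDONLY);
    char req[] = "ping", reply[64];
    door_arg_t arg = {
        .data_ptr = req,  .data_size = sizeof req,
        .desc_ptr = NULL, .desc_num  = 0,
        .rbuf     = reply, .rsize    = sizeof reply,
    };
    int rc = door_call(dfd, &arg); /* control transfers into serv_proc */
    if (rc == 0)
        printf("server said: %s\n", arg.rbuf);
    return rc;
}
```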

Except that the "handover" between client and service is inline and synchronous, "nothing ever sleeps" in the process. The service needn't listen for and accept connections. The operating system "transfers" execution directly - context switches to the service, runs the door function, context switches to the client on return. The "normal" scheduling (where the server/client sleeps, becomes runnable from pending I/O and is eventually selected by the scheduler) is bypassed here and latency is lower.

Purely functionality-wise, there's nothing you can do with doors that you couldn't do with a (private) protocol across pipes, sockets, HTTP connections. You "simply" use a faster/lower-latency mechanism.
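For comparison, the same request/response shape over ordinary POSIX primitives - a socketpair and a forked "server" - looks like this (function name and the uppercase-echo protocol are mine, purely illustrative):

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* One synchronous round trip: send request, child uppercases it, read reply.
 * Unlike a door call, this goes through the normal scheduler on both sides. */
int doors_style_call(const char *request, char *reply, size_t reply_len)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == -1)
        return -1;

    if (pid == 0) {                    /* "server": read, transform, reply */
        close(sv[0]);
        char buf[128];
        ssize_t n = read(sv[1], buf, sizeof buf - 1);
        for (ssize_t i = 0; i < n; i++)
            buf[i] = (char)toupper((unsigned char)buf[i]);
        if (n > 0)
            write(sv[1], buf, (size_t)n);
        close(sv[1]);
        _exit(0);
    }

    /* client: make the "call" and block for the response */
    close(sv[1]);
    write(sv[0], request, strlen(request));
    ssize_t n = read(sv[0], reply, reply_len - 1);
    reply[n > 0 ? n : 0] = '\0';
    close(sv[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```

Both sleeps (server waiting in read, client waiting for the reply) are exactly the scheduler round trips that a door call short-circuits.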

(I actually like the "task gate" comparison another poster made, though doors do not require a hardware-assisted context switch)


Well, Doors' speed was derived from hardware-assisted context switching, at least on SPARC. The combination of ASIDs (which allowed task switching with reduced TLB flushing) and the WIM register (which marked which register windows were valid for access by userspace) meant that IPC speed could be greatly increased - in fact, that was the basis for the "fast path" IPC in Spring OS, from which Doors were ported into Solaris.


I was (more of) a Solaris/x86 kernel guy on that particular level, and I know the x86 kernel did not use task gates for doors (or for any context switching other than the double-fault handler). Linux did task switching via task gates on x86 until 2.0, IIRC. But then, hw assist or no, x86 task gates "aren't that fast".

The SPARC context switch code, to me, was always very complex. The hardware had so much "sharing" (the register window set could split across multiple owners, as could the TSB/TLB, and the "MMU" was elaborate software in sparcv9 anyway). SPARC's Achilles heel was always the "spills" - needless register window (and other CPU state) transfers to/from memory. I'm kinda still curious from a "historical" point of view - thanks!


The historical point was that for Spring OS "fast path" calls, if you kept the register stack small enough, you could avoid spilling entirely.

Switching from task A to task B to service a "fast path" call AFAIK (I have no access to the code) involved using the WIM register to mark the windows used by task A as invalid (so any use would trigger a trap) and changing the ASID value - so if task B was already in the TLB you'd avoid flushes, or reduce them to flushing only when running out of TLB slots.

The "golden" target for fast-path calls was calls that required as little stack as possible, and for common services they might even be kept hot so they would already be in the TLB.


Is the Spring source available anywhere?


Unfortunately not, as far as I know. Only published articles and technical reports :(


I once had the university CD-ROM release, but I've lost it, sadly.


So if I understand it correctly, the IPC advantage is that they preserve registers across the process context switch, thereby avoiding having to do notoriously expensive register saves and restores? In effect, leaking register contents across the context switch becomes a feature instead of a massive security risk. Brilliant!

Is that it?



