" If you're interested, I recently touched on the early history and original motivation in this blog post"
Appreciate the write-up; it was actually a good read. I did see a convergence between your work and the Dresden group's in this:
" the external dependencies on top of which rump kernels run were discovered to consist of a thread implementation, a memory allocator, and access to whatever I/O backends the drivers need to access"
Their papers from 2006-2010 on pushing kernel functionality into user mode on L4 kept mentioning the same three things, albeit each needing extra work. It suggests that academics trying to improve the kernel/user-mode status quo should keep working on making those pieces easier to understand, modify, debug, integrate, and so on: they keep popping up as critical to otherwise unrelated projects [for obvious reasons, but still].
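To make that quote concrete, here's a rough C sketch of what such a host interface could look like. The names and shape are purely my illustration of the three dependencies; the actual interface in rump-kernel land (rumpuser) differs in its details.

    /* Hypothetical "host ops" table capturing the three dependencies:
     * threads, a memory allocator, and access to the I/O backends the
     * hosted drivers need.  Illustrative only. */
    #include <stddef.h>
    #include <sys/types.h>

    struct host_ops {
        /* 1. thread implementation */
        int   (*thread_create)(void (*fn)(void *), void *arg);
        void  (*thread_yield)(void);

        /* 2. memory allocator */
        void *(*mem_alloc)(size_t len);
        void  (*mem_free)(void *ptr, size_t len);

        /* 3. I/O backend access, e.g. a block device or packet interface */
        int     (*io_open)(const char *backend);
        ssize_t (*io_read)(int handle, void *buf, size_t len, off_t off);
        ssize_t (*io_write)(int handle, const void *buf, size_t len, off_t off);
    };

    /* Whatever hosts the drivers (a userspace process, a unikernel on Xen,
     * an L4 task) fills in this table, and the hosted kernel code calls
     * only through it, which is what keeps the drivers portable. */

Whether it's L4 or Xen or plain POSIX underneath, the host only has to supply those three kinds of services, which is presumably why the same short list keeps turning up.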
" Since this is the Xen blog, we should unconventionally understand ASIC to stand for Application Specific Integrated Cloud."
I really wish the community didn't reinvent ASIC's definition. ASC would've done nicely. This is going to mess up search results for people researching either topic and filtering by titles/abstracts. I know because one paper threw me off for 5-10 minutes: I skipped the abstract, assumed it meant a chip, and was very confused by the findings. "How the hell did they implement rump kernels on an ASIC? Where's the chip boundary in this diagram? Are these PCI devices in x86 servers?" It will only get worse in the near future, since much cutting-edge cloud work runs on FPGAs or leverages ASICs. I know: I've got to live with it. Just annoying as hell...
" the ability to run unmodified POSIX-y software on top of the Xen hypervisor via the precursor of the Rumprun unikernel was born. "
Speaks to what a good job you did on your end. Back to the HN conversation, though.
"all that without fully understanding the related work. So you tend to make goal-driven assumptions, and hence "related work" generally tends to be more wrong than right. "
Interesting. I'll try to keep that in mind when reading seemingly bad related-work sections in the future.
"btw, my papers are somewhat obsolete, the dissertation is still mostly accurate"
I was going to read... thoroughly skim... the dissertation anyway. Thanks for the tip, though, because I could've gotten lazy and read a paper instead. ;) I'll just use the dissertation and the website when I get around to trying to learn this stuff.
"I never saw the LeVasseur paper as being in the same category as e.g. DDE. "
The connection is that the paper tried to reuse unmodified drivers with new OSes and clients (e.g. stand-alone apps). It implied one could even use several OSes if one supported hardware X and another hardware Y. That's what DDE did, albeit differently, and what people told me you did, with yet another implementation strategy. That's the only connection. I mean, don't you at some point have a client (a user-mode app, a unikernel on Xen) call a stub/function in one space that gets redirected to code from NetBSD, which executes it against the hardware? Seems like a similarity. However, this conversation has shown yours to be much more advanced and portable in design and implementation.
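Toy sketch of that stub point, with purely hypothetical names (the real rump kernel client interfaces are more involved than this):

    #include <stdio.h>

    /* Stand-in for reused driver code living in another space, e.g. a
     * NetBSD driver compiled into a rump kernel or a driver VM. */
    static int driver_open(const char *path)
    {
        printf("driver component services open(\"%s\")\n", path);
        return 3;  /* fake descriptor */
    }

    /* The stub the client links against.  In the systems we're discussing,
     * this call would be marshalled across a process, VM, or hypercall
     * boundary rather than made directly. */
    static int stub_open(const char *path)
    {
        return driver_open(path);
    }

    int main(void)
    {
        int fd = stub_open("/dev/example");
        printf("client got descriptor %d\n", fd);
        return 0;
    }

The differences are all in where driver_open really lives and how the redirection is done, which is exactly where the projects diverge.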
The similarities ended up being the main goal (driver/feature reuse), several areas of implementation (the dependencies), hooking into a major BSD/Linux, and turning that into something supporting lightweight VMs. Past that, your work is totally in its own category given its specifics and the no-fork focus. Congratulations on being original in a space where that's uncommon. And the history indicates you got to originality by focusing on not being original (reusing code). Oh, the ironies of life!
"Not sure if you can manually manage a research database. Are we really not able to do that automatically in 2015, or is it just a question of nobody building the right kind of crawler/searcher? "
You probably can. Data mining is outside my expertise, though. I have over 10,000 papers on software engineering, formal verification, INFOSEC, tech like yours, etc. Quite a few are obscure despite completely solving a category of problem. That's a start. Maybe a tool such as the open-source, DARPA-sponsored DeepDive could be used to sort it. Side benefit: the overly paranoid run screaming when they see "Powered by DARPA tech" in the fine print under the search box. :O
http://deepdive.stanford.edu/
"The other problem is that most research tends to be conducted with a "graduate-and-run" method. The professor might have a more holistic vision, but the professor lacks a) time to engage in such discussion b) a grass-roots understanding."
I may have a solution to that. My collection was built, and individual works evangelized, with no participation on those groups' part. Matter of fact, some were very hard to find; I added a few gems from the late '90s just this month. The common denominator is that a description and a PDF are published somewhere accessible to the public. If it's a group (e.g. a CS department), it's even easier to manually or automatically pull the research if they simply keep a dedicated publications page. If students don't care but professors do, the professors might be willing to send in interesting work with pre-made tags, etc. It takes little time. You could even make students do it as part of their requirements, with a list of tags on the website and a suggestion capability for ease of use. Others digging through the database, with motivation to build, can pick up abandoned ideas.
As Jeremy Epstein at NSF told me, the biggest problem will probably be getting buy-in from the schools outputting high-quality stuff. Without critical mass, it's unlikely to go anywhere. However, I fight with myself over whether to push it anyway, given that something good might come from it, much like CiteSeerX's passive collection. Even if only a few use it, something really good might come out of it, and I'd hate to waste that potential. It's an internal struggle, as idealism and pragmatism rarely go in the same direction in this field.
Anyway, glad you like the concept. Still getting third-party input before going all out with it.
The community didn't reinvent "ASIC". It was my joke, but humour is difficult ... Read the conclusions of my dissertation. Then realize that the cloud is just one form of special-purpose hardware. Then be me trying to tongue-in-cheek claim that I foresaw the potential of rump kernels on the cloud 4 years earlier. Then maybe the joke will be funny. If not, well, you don't get your money back, sorry, your only condolence is that I rarely use the same joke twice.
I still don't see too much similarity between DDE and using full-OS VM's to act as DDE backends. It's like observing that unikernels and traditional timesharing operating systems can both run applications, so they're similar. Yes, but ... Anyway, I understand what you mean, disagree, and don't think it's worth debating further.
Well, if it was a joke that some are running with, that's another matter. Might get a good laugh out of it later. As far as the discussion goes, yeah, it's a wrap, and I appreciate your clarifications on things.