
Are we talking about Plan 9 or Lisp Machines?



Both fit the bill at different points in time. Plan 9 is closer to what we would expect a modern computer to be, with network transparency built into its very core. It's natural to spread your work across multiple machines from a single seat. Authentication, authorization and securing the channel against eavesdropping were not as big a concern then as they are now, when every network should be treated as public and hostile.
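For a sense of what "built into its very core" means in practice, here's a rough sketch in Plan 9 C of pulling a remote machine's file tree into the local per-process namespace (the server address and mount point are made up for illustration):

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        int fd;

        /* dial a remote 9P file server over TCP (made-up address) */
        fd = dial("tcp!fileserver!9fs", nil, nil, nil);
        if(fd < 0)
            sysfatal("dial: %r");

        /* splice its file tree into our namespace at /n/remote; from here
         * on, any program in this namespace reads and writes files under
         * /n/remote and the kernel speaks 9P over the network for it */
        if(mount(fd, -1, "/n/remote", MREPL, "") < 0)
            sysfatal("mount: %r");

        print("remote tree visible at /n/remote\n");
        exits(nil);
    }

Everything (devices, windows, CPU servers) is a file server that can be attached this way, which is where the single-seat, many-machines feel comes from.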

I'm not sure how a Lisp machine of Symbolics or LMI heritage, or a Smalltalk system, would deal with distributed functionality across untrusted networks. From the language PoV, it seems like a natural fit for Smalltalk.


I surely wouldn't like my GPGPU at the end of a network socket instead of shared-memory IPC.


If the data resides on the same node as the GPGPU and you only run queries against it (as you would with a remote Jupyter or something like it), with little data being moved across the network, there is no reason its memory should be directly visible to the local CPU, and little benefit if it is.

The only catch is that your OS will have to do some cluster management to make sure data stays close to the programs using it.


If data is being transferred frequently between RAM, the CPU, and the GPU, it will kill performance anyway. So maybe a GPU is actually one of the better things to have on the other end of a socket.


If your OS is able to deal transparently with heterogeneous hosts across different network transports, then the GPU and its memory will end up being treated as just another compute node hooked up to one of the other compute nodes by a ridiculously fast network.


WebGL 2.0 is the best we've got so far, and it isn't impressive compared with what modern GPUs are capable of.


Why? Remote GPU compute is a totally viable solution for many classes of problems, and systems like QNX or Plan 9 would actually do it properly and allow you to have powerful setups such as thin clients with little configuration, without constantly reinventing so many wheels.

Case in point: the other commenter mentioning Jupyter is an excellent example. Jupyter is kind of a classic case where something like QNX would shine: it's a multi-process system that we expose over HTTP to give remote clients a transport. In QNX, IPC is the remote transport as well as the local transport, so the distinction between running the Jupyter notebook/kernel locally, split across machines, or entirely on another machine is relatively transparent. This goes all the way up and down the stack -- from the core process layer to the GUI itself (so even GUI programs can be remote, and the desktop protocol proxies the command buffers to you to render locally). Jupyter, as a system, always has an underlying transport layer for talking between processes, computing and transferring results. So your "GPGPU" being at the other end of a network socket is already a very common case, in fact -- one that it is designed for explicitly (for basically anyone who does DL, for instance).
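To make that concrete, here's a minimal sketch of what QNX Neutrino's native message passing looks like (the service name "calc_service" is made up, and the cross-node case assumes Qnet and the global name service are configured):

    #include <errno.h>
    #include <sys/dispatch.h>   /* name_attach(), name_open(), name_close() */
    #include <sys/neutrino.h>   /* MsgReceive(), MsgReply(), MsgSend() */

    struct req  { int a, b; };
    struct resp { int sum; };

    /* server: register a name and answer requests forever */
    int serve(void) {
        name_attach_t *att = name_attach(NULL, "calc_service", 0);
        if (att == NULL) return -1;      /* NAME_FLAG_ATTACH_GLOBAL + gns
                                            would make this name visible
                                            across Qnet nodes */
        for (;;) {
            struct req r;
            int rcvid = MsgReceive(att->chid, &r, sizeof r, NULL);
            if (rcvid <= 0) continue;    /* skip pulses/errors */
            struct resp out = { r.a + r.b };
            MsgReply(rcvid, EOK, &out, sizeof out);
        }
    }

    /* client: identical code whether the server is local or remote */
    int ask(int a, int b) {
        int coid = name_open("calc_service", 0);
        if (coid < 0) return -1;
        struct req r = { a, b };
        struct resp out;
        long rc = MsgSend(coid, &r, sizeof r, &out, sizeof out);
        name_close(coid);
        return rc < 0 ? -1 : out.sum;
    }

The client never says where the server is; resolving the name and moving the bytes (an in-kernel copy locally, Qnet across the network) is the kernel's problem, which is exactly the property Jupyter has to rebuild on top of HTTP.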

In something like QNX I'd be able to simply type `jupyter notebook`, the kernel would start on the machine in my other room (a Threadripper with a nice GPU), the notebook UX itself would start locally, and they would talk immediately (because policy/authorization is baked into the IPC/user/process mechanisms -- no HTTP auth, etc.), with no need at the API layer to distinguish between local shared memory and remote network transports. It would always just work. I could boot up a GCloud machine with 8x TPUs and a $10,000 CPU, just "add" it to my network, run Jupyter again, and it would all be the same (except for some latency, of course). I could just use a Raspberry Pi as my thin client for most purposes, honestly. Compute resources would be completely disaggregated, more or less.

Jupyter already does things like "render a ggplot2 plot of some data on a remote machine, convert it to a PNG, tunnel it over HTTP into the browser for display" -- what's the difference between using a plain socket and HTTP? Not much. You could even use HTTP as the layer-7 protocol over QNX IPC, if you wanted...

In retrospect, it's probably not a coincidence that HTTP has exploded in popularity as an L7 application protocol. Remote compute is a vital component of many systems today, and HTTP is one of the easiest ways to accomplish it thanks to the ubiquity of browsers and HTTP stacks (think of how much stuff tunnels over HTTPS now!). All mainstream operating systems make very hard distinctions between remote and local IPC mechanisms -- so you might as well use HTTP, bind to /run/app/local.sock or 0.0.0.0:443, and just issue GET requests. Boom, you have a local and remote system. It's the easiest way to get "local" and "remote" application transport in the same purchase, even if it's error-prone and crappy as hell.
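For what it's worth, the "same purchase" trick looks roughly like this in POSIX C: the accept/handle loop is identical, and only the bind step decides whether you're local or remote (the socket path and port are the ones from above, with 8443 standing in for 443 since a real 443 listener would also want TLS and root):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int listen_unix(const char *path) {      /* "local" flavour */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un a = { .sun_family = AF_UNIX };
        strncpy(a.sun_path, path, sizeof a.sun_path - 1);
        unlink(path);
        bind(fd, (struct sockaddr *)&a, sizeof a);
        listen(fd, 16);
        return fd;
    }

    static int listen_tcp(unsigned short port) {    /* "remote" flavour */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = { .sin_family = AF_INET,
                                 .sin_port = htons(port),
                                 .sin_addr.s_addr = htonl(INADDR_ANY) };
        bind(fd, (struct sockaddr *)&a, sizeof a);
        listen(fd, 16);
        return fd;
    }

    static void serve(int lfd) {                    /* same loop either way */
        static const char resp[] =
            "HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nok\n";
        for (;;) {
            int c = accept(lfd, NULL, NULL);
            if (c < 0) continue;
            char buf[4096];
            read(c, buf, sizeof buf);               /* ignore request details */
            write(c, resp, sizeof resp - 1);
            close(c);
        }
    }

    int main(int argc, char **argv) {
        int lfd = (argc > 1 && strcmp(argv[1], "tcp") == 0)
                      ? listen_tcp(8443)
                      : listen_unix("/run/app/local.sock");
        serve(lfd);
    }

That's the whole appeal: one protocol, one handler, and the local/remote split is a one-line bind decision.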

And, of course, if you are playing a game -- there's nothing stopping you from running everything locally at native speed!

Instead of using systems like QNX, which elegantly handle distributed computing in one place at the core of the IPC/process/network mechanism, we look basically doomed to reinvent all of it with bespoke transport/application/distribution protocols throughout the stack. It's a huge shame, IMO.


I said a GPGPU: something capable of DirectX 12, Metal, Vulkan, or LibGNM fill rates of GBs per second, not a 2D HTML5 canvas or WebGL 2.0 dropping frames on hardware that runs GL ES perfectly fine in native code.


If anyone is interested in trying out QNX, there is a tutorial:

https://membarrier.wordpress.com/2017/04/12/qnx-7-desktop/




