Hacker News

We cannot afford seamless distributed systems, and I don't think we ever will.

I use Python because I don't care if adding two numbers takes a microsecond instead of a nanosecond. But if a network call suddenly takes 1 second instead of 10 ms? Well, that's a huge problem; let's add a memory cache, a rack-level cache, parallel fetches, and a whole bunch of monitoring.
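A minimal sketch of the first layer you reach for once a remote call can take a second instead of milliseconds: an in-memory cache in front of the slow fetch. All names here (`TTLCache`, `fetch_with_cache`, `fetch_remote`) are invented for illustration, not any particular library's API.

```python
import time


class TTLCache:
    """In-memory cache with per-entry expiry: the cheapest defense
    against a network call that suddenly costs a full second."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (value, stored_at)

    def get(self, key):
        hit = self.entries.get(key)
        if hit is None:
            return None
        value, stored_at = hit
        if time.monotonic() - stored_at > self.ttl:
            del self.entries[key]  # entry expired; force a refetch
            return None
        return value

    def put(self, key, value):
        self.entries[key] = (value, time.monotonic())


def fetch_with_cache(key, cache, fetch_remote):
    """Serve from cache when possible; fall through to the slow call."""
    value = cache.get(key)
    if value is not None:
        return value
    value = fetch_remote(key)  # the call that may take a second
    cache.put(key, value)
    return value
```

The rack-level cache and parallel fetch from the comment above would be further layers stacked behind `fetch_remote`, which is exactly the point: none of this machinery exists for local adds.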

Local compute is improving much faster than the internet, and faster even than the local network. I sure hope we get better abstractions, but caring about remote vs. local calls is not going away.




You could design an OS with abstractions built around latency instead of physical machines. It would still let you find and use resources according to their constraints, but wouldn't force you to keep track of which exact machine they are located on.
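A purely hypothetical sketch of what such a latency-oriented abstraction could look like: callers ask a directory for a resource that fits a latency budget, never for a specific machine. Every name here (`Resource`, `LatencyDirectory`, `acquire`) is invented to illustrate the idea, not an existing API.

```python
from dataclasses import dataclass


@dataclass
class Resource:
    name: str
    expected_latency_ms: float  # measured or advertised access cost


class LatencyDirectory:
    """Registry that hands out resources by latency budget, not by host."""

    def __init__(self):
        self.resources = []

    def register(self, resource):
        self.resources.append(resource)

    def acquire(self, max_latency_ms):
        # Return the fastest resource within the caller's budget; the
        # caller never learns (or cares) which machine backs it.
        candidates = [r for r in self.resources
                      if r.expected_latency_ms <= max_latency_ms]
        if not candidates:
            raise LookupError("no resource within latency budget")
        return min(candidates, key=lambda r: r.expected_latency_ms)
```

The interesting design question is what happens when the budget cannot be met: an OS built this way would have to surface that failure mode as a first-class event, not hide it behind a blocking call.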


I am not sure what you need a new OS for, or what it could give you that today's computing cannot.

If you want users to know they are talking to a remote machine but don't want them to care which exact machine it is, we have a ton of great solutions already: load balancers, connection pools, service meshes, anycast, dynamic DNS, etc...

If you want remote calls to be indistinguishable from local calls at the source-code function level, this is also solved! Many RPC frameworks and remote SDKs provide a class-based interface that acts the same as a local class.
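A minimal sketch of that class-based pattern: a proxy object exposes the same method calls as the local class, but forwards each one over a transport. Real frameworks (gRPC stubs, `xmlrpc.client.ServerProxy`, etc.) work along these lines; the `RemoteProxy` and `loopback_transport` names here are made up, and the transport is simulated in-process rather than going over a network.

```python
class RemoteProxy:
    """Looks like a local object; every method call is forwarded
    through a transport callable: transport(method_name, args) -> result."""

    def __init__(self, transport):
        self._transport = transport

    def __getattr__(self, method_name):
        def remote_method(*args):
            return self._transport(method_name, args)
        return remote_method


# --- usage with a simulated (loopback) transport ---

class Calculator:
    def add(self, a, b):
        return a + b


backend = Calculator()


def loopback_transport(method, args):
    # A real transport would serialize the call and send it over the
    # wire; here we just dispatch to a local object.
    return getattr(backend, method)(*args)


calc = RemoteProxy(loopback_transport)
result = calc.add(2, 3)  # reads exactly like a local call; returns 5
```

Which is the parent comment's point in miniature: the call site is indistinguishable, yet `calc.add` can still take a second or fail in ways a local `add` never would.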

The only place where an OS can help is if any OS function can be magically located on another machine. But even then... we have remote filesystems (NFS), remote terminals and execution (SSH), remote graphics (X11), remote audio (ALSA/PulseAudio)... What is left for the new OS? Process management? Is it worth it?


> If you want users to know they are talking to a remote machine but don't want them to care which exact machine it is, we have a ton of great solutions already: load balancers, connection pools, service meshes, anycast, dynamic DNS, etc...

Those resources are complex to program against. An OS should offer a simplified abstraction layer to make them as transparent as possible. And yes, process management is worth it: a unified programming model that doesn't force you to keep track of where each process instance is located is essential for massively parallel computing.

Of course this could be done with platforms for massively parallel computing. The point of building an OS would be to put those platforms as close to the metal as possible to improve their efficiency.
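On a single machine, Python's `concurrent.futures` already gives a taste of that unified model: you submit work against a pool and never track which worker ran it. A cluster-wide version of this same interface, without the programmer naming machines, is roughly what the comment above asks the OS to provide.

```python
from concurrent.futures import ThreadPoolExecutor


def square(n):
    return n * n


# The caller never learns which worker executed each task; map() hides
# the scheduling entirely and returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))

# results == [0, 1, 4, 9, 16, 25, 36, 49]
```

The distributed analogue has to answer harder questions (worker failure, data locality, the latency concerns from upthread), which is where the single-machine illusion stops being free.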





