
> One of the neat things we'll be losing with X11 is the fact that you can do graphics -- fast -- entirely with the wire protocol.

As long as you only want to do very basic graphics -- no antialiasing, no color blending, no subpixel coordinates. And it isn't even that fast; everything ends up rendered on the CPU, possibly even all on a single thread, using code paths that haven't seen much optimization in the last 20+ years.
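For context on what "graphics entirely with the wire protocol" means: core requests are small byte-packed messages. A sketch of encoding a PolyFillRectangle request (opcode 70) -- the helper name and resource IDs are made up, and little-endian byte order as negotiated at connection setup is assumed:

```python
import struct

def poly_fill_rectangle(drawable, gc, rects):
    """Encode a core-protocol PolyFillRectangle request (opcode 70).

    Illustrative helper only. The request length field is counted in
    4-byte units: 3 for the fixed part plus 2 per rectangle.
    """
    length = 3 + 2 * len(rects)
    # header: opcode, unused byte, length; then drawable and gc IDs
    req = struct.pack("<BxHII", 70, length, drawable, gc)
    for x, y, w, h in rects:
        req += struct.pack("<hhHH", x, y, w, h)  # INT16 x,y; CARD16 w,h
    return req

packet = poly_fill_rectangle(0x400001, 0x400002, [(10, 20, 100, 50)])
assert len(packet) == 20  # 4-byte header + drawable + gc + one rectangle
```

A whole filled rectangle costs 20 bytes on the wire, which is the efficiency being praised above -- but the server still has to rasterize it somewhere.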




This is why the XRender extension was introduced. There you have antialiasing, all the blending modes you could wish for, subpixel coordinates, and advanced drawing operations like gradients, and it is fast because it can be fully hardware accelerated. All working over a very efficient wire protocol. E.g. Cairo uses XRender as a backend.
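The blending XRender provides is Porter-Duff compositing on premultiplied-alpha pixels. A minimal sketch of the OVER operator (the model behind XRender's PictOpOver), with components as floats in [0, 1]:

```python
def over(src, dst):
    """Porter-Duff OVER on premultiplied-alpha RGBA tuples:
    result = src + (1 - src_alpha) * dst, applied per component."""
    sa = src[3]
    return tuple(s + (1.0 - sa) * d for s, d in zip(src, dst))

# 50%-opaque red (premultiplied) composited over an opaque blue background:
red_half = (0.5, 0.0, 0.0, 0.5)
blue = (0.0, 0.0, 1.0, 1.0)
assert over(red_half, blue) == (0.5, 0.0, 0.5, 1.0)
```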


It's not all hardware accelerated though, is it? Both the X server and Cairo depend on the pixman library, which is a CPU/SIMD-optimized pixel-manipulation library.

Even xf86-video-intel, the intel X11 driver package, on my system depends on pixman.


Nothing on your feature list is incompatible with a client/server approach though. The only downside would be when large data blobs need to be sent over a network each frame.

In the end, modern 3D APIs are also just a 'wire protocol': rendering commands are recorded by the CPU into local command buffers and played back by the GPU (which is often connected to the CPU by a comparatively slow bus), the only limiting factor is the amount of data that's communicated between the CPU and GPU - and of course any additional latency that would be added by a network connection.
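The record/playback model described above can be sketched as a toy in a few lines -- real APIs record opaque binary commands into driver-managed buffers, but the shape is the same: the CPU only appends commands, and something else (GPU, or a remote server) executes them later:

```python
class CommandBuffer:
    """Toy command buffer: 'record' appends commands locally,
    'submit' plays them back in order against an executor."""
    def __init__(self):
        self.commands = []

    def record(self, op, *args):
        self.commands.append((op, args))

    def submit(self, executor):
        for op, args in self.commands:
            executor[op](*args)

log = []
executor = {
    "clear": lambda color: log.append(f"clear {color}"),
    "draw": lambda n: log.append(f"draw {n} vertices"),
}
cb = CommandBuffer()
cb.record("clear", "black")
cb.record("draw", 3)
cb.submit(executor)
assert log == ["clear black", "draw 3 vertices"]
```

Whether the buffer crosses a PCIe bus or a network socket is, architecturally, a detail -- which is the point being made.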


I wonder if there could be a X server implementation that ran fully on the GPU.


Pure-CPU rendering is a side effect of XFree86 being essentially the "lowest common denominator" platform (and even then some stuff was accelerated) - it comes from X.Org X11 releases being "base code" for vendors to build their own servers on.

For comparison, SGI used a wildly different architecture underneath, which heavily impacted performance - it's also why "xsgi" would default to indexed colour and you'd use 24-bit/32-bit visuals only for some windows - the server would use indexed-colour visuals on the actual framebuffer to reduce memory bandwidth and composite everything together in hardware.
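The bandwidth argument for indexed visuals is easy to quantify (numbers below are illustrative, not SGI's actual figures): one byte per pixel plus a small palette, versus 3-4 bytes per pixel for truecolor:

```python
# Framebuffer footprint at a hypothetical 1280x1024 resolution:
W, H = 1280, 1024
indexed = W * H * 1 + 256 * 3   # 8bpp pixels + 256-entry RGB palette
truecolor = W * H * 4           # 32bpp
assert indexed * 3 < truecolor  # less than a third of the memory traffic

# Display hardware resolves each index through the colormap on scanout:
palette = {0: (0, 0, 0), 1: (255, 0, 0)}
framebuffer = [0, 1, 1, 0]
pixels = [palette[i] for i in framebuffer]
assert pixels == [(0, 0, 0), (255, 0, 0), (255, 0, 0), (0, 0, 0)]
```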

It's also why there were "X11 Overlays" for GL - they meant you could easily implement parts of the GUI in X11 and still render on top of direct-rendered visuals that bypassed X11.


Interesting! I wonder if there are any writeups around with more details on the SGI X architecture?

But I was really thinking about going further, with all or nearly all of the X server executing on the GPU, with only CPU shims for input peripherals, networking, etc.


The closest to this idea was less X11 and more the (to my knowledge never actually fulfilled) promise of the NeXTDimension color board for the NeXT Cube, which was supposed to implement the entire graphics stack on the board's embedded CPU.

X11 could be implemented similarly - essentially sticking a complete standalone "X11 terminal" on an add-in card. Current GPU architectures aren't necessarily a good fit for implementing it directly, but you could combine an embedded OS running on an extra CPU, handling the protocol interactions and I/O, with a possibly more tailored interface to talk to the GPU (compare with AMD's promise of HSA, or the post-360 Xbox or PS4/PS5 architectures, where GPU and CPU share common memory).

A NeWS-style engine (possibly with something simpler than full PostScript) would work great with that, especially if you had properly shared interface libs on it.
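The NeWS idea -- ship a small program to the display server and interpret it there, instead of streaming individual drawing calls -- can be sketched with a deliberately tiny, made-up stack language (stack-based like PostScript, but nothing like the real thing):

```python
def run(program, canvas):
    """Interpret a tiny stack-based drawing program server-side.
    Vocabulary is invented for illustration: integers push,
    'dup' duplicates the top, 'rect' pops x y w h and draws."""
    stack = []
    for token in program.split():
        if token == "rect":
            h, w, y, x = stack.pop(), stack.pop(), stack.pop(), stack.pop()
            canvas.append(("rect", x, y, w, h))
        elif token == "dup":
            stack.append(stack[-1])
        else:
            stack.append(int(token))
    return canvas

canvas = []
run("10 10 50 dup rect", canvas)       # a 50x50 square at (10, 10)
assert canvas == [("rect", 10, 10, 50, 50)]
```

One short string over the wire replaces a round trip per primitive, which is exactly why this model appeals for a card-resident display engine.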


We could run an OS on current GPUs, I think, hardware-wise. And we know from history that compilers can make up for various hardware shortcomings on the OS-support side. E.g. multi-task scheduling and concurrency can be done at the compiler level.
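Compiler/runtime-level multitasking is an old trick: have the compiler insert yield points and round-robin between tasks, no timer interrupt or OS scheduler required. A minimal sketch using generators as the "compiled" tasks:

```python
def scheduler(tasks):
    """Cooperative round-robin scheduler running entirely in the
    language runtime. Each task is a generator that yields at
    every switch point; finished tasks drop out."""
    trace = []
    while tasks:
        task = tasks.pop(0)
        try:
            trace.append(next(task))   # run task up to its next yield
            tasks.append(task)         # rotate it to the back
        except StopIteration:
            pass                       # task finished; don't requeue
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"

trace = scheduler([worker("a", 2), worker("b", 2)])
assert trace == ["a:0", "b:0", "a:1", "b:1"]
```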



