Hacker News

io_uring just racked up another CVE, so I kinda feel that its severely under-designed nature will always haunt it. The idea that you can just hand off infinite amounts of work for the kernel to do on your behalf is pretty fundamentally broken. It is a concrete implementation of wishful thinking.


That analysis might seem smart, but let's try a game of Mad Libs:

The Linux Kernel just racked up another CVE, so I kinda feel that its severely under-designed nature will always haunt it.

KDE just racked up another CVE, so I kinda feel that its severely under-designed nature will always haunt it.

Firefox just racked up another CVE, so I kinda feel that its severely under-designed nature will always haunt it.

Chrome just racked up another CVE, so I kinda feel that its severely under-designed nature will always haunt it.

Windows just racked up another CVE, so I kinda feel that its severely under-designed nature will always haunt it.

Photoshop just racked up another CVE, so I kinda feel that its severely under-designed nature will always haunt it.

All CPUs just racked up another CVE, so I kinda feel that its severely under-designed nature will always haunt it.

What's the theme? Racking up CVEs is something all software & hardware does. Mistakes can happen in design and in implementation, and no one is immune. Using the presence of CVEs as an indication of immaturity or a fundamental design flaw isn't helpful. In fact, it's probably the opposite: software with no CVEs probably just means no one is paying attention to it. Sure, in the theoretical case where you've built a formal proof and translated it into a memory-safe language somehow (and assuming you've made no mistakes modelling your entire system in the proof), then maybe. But that encompasses 0% of all software.

> The idea that you can just hand off infinite amounts of work for the kernel to do on your behalf is pretty fundamentally broken. It is a concrete implementation of wishful thinking

How is that any different from a file descriptor? The kernel is free to set up limits on how much work you can have outstanding at any given time (maybe those bits are missing right now, but it doesn't feel like an intractable problem).


All "work" you want to do that interfaces with anything on an OS is handed off to the kernel; want to read a file? Kernel. Want to sleep for a while? Kernel. And so on. Besides, things like network traffic are also asynchronous, just like io_uring (even if the socket() interfaces make it look somewhat synchronous). Outside of toy systems, asynchronicity is always a thing, especially when running on multiple cores.

I kind of get where you are coming from, but at the same time, the kernel always gets the last say, so as long as io_uring has a good design and implementation it will be just as good or bad as the OS as a whole. Whether run-of-the-mill programmers are up to the task of properly conceptualising and using such an OS is probably a different question.


Yeah, but it's not well-designed; that's my point. It has blithely shrugged off the tricky question of object lifetimes, which is why it has already collected 16 different CVEs for things like use-after-free. Considering its short history, io_uring has already rocketed to the top of the list of dangerous kernel features.


With Linux 6.0, LSM got the ability to filter io_uring. Deny all and carry on.


Can you provide a source? Eager to read about how this is done.



Are they just going to keep adding hooks to every new system call implemented for io_uring?


My understanding is that this hook is the only one, but to properly secure io_uring one has to implement a check for every call here. Or just disable it.



