Something about that headline really irks me. I think GitHub is an amazing place for people to share code, and I also think it's really nice of them to do this.
But the maintainers aren’t “their” maintainers. They are maintainers using GitHub for their projects.
Probably just me overreacting, just thought I’d mention it.
They're referring to maintainers of projects that GitHub relies on for their own work, so calling them "our maintainers" isn't much of a linguistic stretch.
Without having looked into it too deeply, I feel that they are somewhat “cheating” by using a superserver to launch a new process for each connection, thus letting the OS handle the dynamic allocation needed for each one.
Still pretty impressive project. Would be fun to take a deeper look at it at some point.
> […] thus letting the OS handle the dynamic allocation needed for each connection.
This is what PHK did when designing Varnish (IIRC): instead of dealing with lots of files on its own (like Squid), just create a big storage file, mmap() it, and let the OS VM system do the work.
I think the original point still stands. When the application tries to handle memory pressure itself by writing data structures to disk, it will hit the case where the kernel has already paged that memory out and has to reload it, only to write it back to disk and free the memory afterwards.
It makes sense though - no memory management simplifies the codebase. Letting the host deal with it instead means you get the niceties of process level isolation and less complexity. More eyes are on the OS level code than would be on this project. It seems very clever to me.
I have one of the full size Ploopy trackballs, and it feels really good in the hand. It is 3D-printed, and it shows, but the texture of it feels pretty nice when using it.
I struggled a bit with accuracy at first, but I've since lowered the DPI to around 500 and it's become much more usable for me.
It's worth knowing that the watch also tracks HRV whenever you use the Breathe app, so if you want consistent HRV readings the easiest way is to use Breathe once a day (for example, right when you wake up).
> Could I have done all that in a single shell, then had it automatically cleaned up when I was done?
Yes, that is a pretty standard workflow for most Nix users. You either set up a shell.nix for your project with all of its dependencies, or if you need a certain tool just once you write, for example, `nix-shell -p iotop` to enter a shell where iotop is on the PATH.
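For the project case, a shell.nix can be as small as this (the two packages here are hypothetical stand-ins for whatever your project actually needs):

```nix
# shell.nix -- run `nix-shell` in this directory to enter a shell with
# these tools available; exit the shell and they're gone from your PATH.
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = [ pkgs.gcc pkgs.iotop ];
}
```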
Why would anyone use rdrand directly? Seems like user space applications should use getrandom() or /dev/urandom and the kernel should use rdrand as a complementary random number source in its random number generator.
No user space program should need to use rdrand directly at all.
Indeed, because RDSEED is actually what most people want anyway.
It's an assembly instruction that gets the job done. People should be able to expect that their CPU's assembly instructions work as intended. No different from using AVX intrinsics or hand-crafted assembly in x264/x265 code.
In any case, RDSEED is the instruction for gathering entropy (i.e., seeding a random number generator should use RDSEED), while RDRAND is an older instruction for getting a cryptographically secure random number. There are slightly different amounts of entropy involved in RDSEED vs. RDRAND. So this is a very subtle issue that requires a good understanding of the x86 instruction set.
But if you understand these details, then by golly you should use the instructions!
I believe WireGuard is implemented in the kernel. I'd argue there should be a kernel-wide wrapper function for this sort of thing, or if one already exists, WireGuard should probably use it.
That doesn't explain why systemd uses it, of course.
WireGuard does use the right wrapper -- get_random_u32(). The issue is that the implementation will just return whatever the architecture-provided randomness source supplies, if one is available[1]. That's the real bug.
The problem with creating such a wrapper function is that someone like the systemd/WireGuard developers will inevitably exploit it to drain the entropy pool (whatever that means), at which point kernel drivers may start locking up, waiting for more entropy to appear.
In comparison, get_random_u32() is safe to call at any point — including early boot — and does not affect the global entropy pool. At worst it may return low-quality numbers, but that can be easily fixed by running your own pseudo-random generator on top of it (which is a good idea anyway, because you don't want your kernel module to contend with other parties for RNG ownership).
systemd may be running in an environment where /dev/urandom isn't available, and getrandom() will either block or return nothing depending on whether the entropy pool has been initialised, so you still need a fallback if you're working in the early boot process.