The trouble with corporations is that they have interests quite independent of their customers, and they are not good agents (the principal-agent problem). RedHat, partly because they could not figure out better ways to monetize, has increasingly fought gadgets with gadgets, creating service contracts, support interfaces for open-core products, and so on. This does not maximize the value delivered by open solutions.
Government is not known for speed or efficiency. Good luck getting the average Joe to understand why your little git repo needs to come out of his payroll. Even if you get something passed, all Joe hears on the radio now is how you're stealing his paycheck. Lesson learned: narrow interests are easy political targets. Okay, so let's do a foundation!
So how about foundations? Does every single git repo need a foundation? That's a lot of overhead. Foundations have a scope, and they can also suffer from principal-agent problems. Foundations are a good solution, but they themselves have not really adapted to the information age. Rigid, self-serving governance can easily become entrenched by insiders who beat the drum while cashing checks.
PrizeForge solves a lot of these problems just by being very broad in scope and very neutral as far as interests go. More payment is better. If the market wins, we win. We don't really have to care who or why, but we should try to protect customer value by making money smarter and creating the means of coordination so that nobody moves alone.
PrizeForge is not good yet. But it will be. Our solution to the principal-agent problem will completely change how we do social. To start, we've begun operating our fund-matching systems. Those will help us bootstrap faster. We can serve some of the communities we know well while building up the rest of our features. (Log in after a few hours; I'm currently doing maintenance.)
MAP_HUGETLB can't be used for mmapping files on disk; it can only be used with MAP_ANONYMOUS, with a memfd, or with a file on a hugetlbfs pseudo-filesystem (which is also in memory).
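A minimal sketch of the two in-memory cases that do work (assuming a 2 MiB default huge page size, some pages reserved via /proc/sys/vm/nr_hugepages, and glibc >= 2.27 for memfd_create; just an illustration, not a definitive recipe):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 2 * 1024 * 1024;   /* one 2 MiB huge page */

        /* Anonymous mapping backed by huge pages; fails with ENOMEM if
         * no huge pages have been reserved. */
        void *anon = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (anon == MAP_FAILED)
            perror("anonymous MAP_HUGETLB");

        /* memfd created with huge-page backing; the fd behaves like a
         * file on hugetlbfs, so a plain MAP_SHARED mmap works. */
        int fd = memfd_create("huge", MFD_HUGETLB);
        if (fd < 0) {
            perror("memfd_create(MFD_HUGETLB)");
            return 1;
        }
        if (ftruncate(fd, len) < 0)
            perror("ftruncate");
        void *shared = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (shared == MAP_FAILED)
            perror("memfd mmap");
        return 0;
    }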
This is quite interesting since I, too, was under the impression that mmap cannot be used on disk-backed files with huge pages. I tried and failed to find any official kernel documentation around this, but I clearly remember trying to do this at work (on a regular ECS machine with Ubuntu) and getting errors.
Based on this SO discussion [1], it is possibly a limitation with popular filesystems like ext4?
If anyone knows more about this, I'd love to know what exactly the requirements are for using huge pages this way.
Cool! Thanks for the example. The aforementioned work thing requires MAP_SHARED as well, which IIRC is the reason it would fail when combined with file backing and huge pages, but private mappings work as you show.
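If anyone wants to check the MAP_SHARED vs MAP_PRIVATE behavior on their own kernel and filesystem, a quick repro sketch like this should do ("testfile" is a placeholder for any regular file at least one 2 MiB huge page long; the outcome depends on kernel version and filesystem):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Try mapping a regular disk file with MAP_HUGETLB, once shared and
     * once private, and report what the kernel says. */
    static void try_map(int fd, int flags, const char *label)
    {
        size_t len = 2 * 1024 * 1024;
        void *p = mmap(NULL, len, PROT_READ, flags | MAP_HUGETLB, fd, 0);
        if (p == MAP_FAILED)
            printf("%s failed: %s\n", label, strerror(errno));
        else
            printf("%s ok\n", label);
    }

    int main(void)
    {
        int fd = open("testfile", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        try_map(fd, MAP_SHARED, "MAP_SHARED | MAP_HUGETLB:");
        try_map(fd, MAP_PRIVATE, "MAP_PRIVATE | MAP_HUGETLB:");
        return 0;
    }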
Trying to google this I found https://lwn.net/Articles/718102/, which suggests that there was discussion about it back in 2017. But I can't find anything else about it except a patchset that I guess wasn't merged (?). So maybe it was just a proposal that never made it in.
Honestly, I never knew any of this; I thought huge pages just worked for all of mmap.
Enthusiast-oriented motherboards often enable Precision Boost Overdrive by default, which raises the power and temperature limits and holds boost for longer periods. To run the CPU at “stock” you need to go in and disable that. Their default Load Line Calibration might be aggressive as well.
Which motherboards enable PBO out of the box? That’s crazy! I know that motherboard manufacturers set some sketchy default turbo durations for Intel CPUs back when Intel was cagey about the spec and let them get away with it, but I thought that AMD was stricter about such things.
Well, for one thing, massive methane leaks in Russian gas infrastructure mean that Russian gas is much worse for the environment than any other source of gas.
I think I agree with you broadly, but to make the counter argument:
A big part of the reason nuclear power isn’t cost-effective nowadays is that those costs have been at least partially internalized. The US federal government has stopped producing cheap nuclear fuel by disassembling nuclear weapons. Nuclear plants need to pay for the cost of storing their spent fuel on site indefinitely. Plant operators need to pay into a federal disaster insurance pool.
This is just a talking point put about by climate deniers; dealing with climate change is cheaper than the alternative.
It's no different from Fox News headlines saying Medicare for all would cost X billion and not mentioning that business as usual would cost twice that.
An attempt to turn a popular, cheaper option into a scary bogeyman with selective lies.
These are per-connection bottlenecks, largely due to implementation choices in the Linux network stack. Even with vanilla Linux networking, vertical scale can get the aggregate bandwidth as high as you want if you don’t need 10G per connection (which YouTube doesn’t), as long as you have enough CPU cores and NIC queues.
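One common application-side pattern for the "enough CPU cores" part (my illustration, not something the parent spells out) is SO_REUSEPORT, which lets the kernel spread incoming connections across per-core listeners. A rough sketch, with the port and worker count as arbitrary placeholders:

    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* One listening socket per worker process on the same port with
     * SO_REUSEPORT, so the kernel distributes incoming connections
     * across workers (and thus cores). */
    static int open_listener(int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        struct sockaddr_in addr;

        if (fd < 0)
            return -1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(fd, 128) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        for (int i = 0; i < 4; i++) {         /* 4 workers, arbitrary */
            if (fork() == 0) {
                int fd = open_listener(8080); /* port is a placeholder */
                if (fd < 0) {
                    perror("listener");
                    _exit(1);
                }
                /* accept()/read()/write() loop, ideally pinned to one
                 * core, would go here */
                pause();
            }
        }
        pause();   /* parent; real code would wait() for the children */
        return 0;
    }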
Another thing to consider: Google’s load balancers are all bespoke SDN, and they almost certainly speak HTTP/1.1 or HTTP/2 between the load balancers and the application servers. So Linux network stack constraints are probably not relevant for the YouTube frontend serving HTTP/3 at all.