OpenStreetMap often has building outlines, but not building height. This would be a nice way to augment that data for visualisations (remember: OSM doesn't take auto-generated bot updates, so don't submit that to the primary source).
It varies. New public APIs or language features may take a long time, but changes to internals and missed optimizations can be fixed in days or weeks, in both LLVM and Rust.
A couple of things that are commonly misunderstood/unappreciated about this:
• Uninitialized bytes are not just some garbage random values, they're a safety risk. Heartbleed merely exposed uninitialized buffers. Uninit buffers can contain secrets, keys, and pointers that help defeat ASLR and other mitigations. As usual, Rust sets the bar higher than "just be careful not to have this bug", and therefore the safe Rust subset requires making uninit impossible to read.
• Rust-the-language can already use uninitialized buffers efficiently. The main issue here is that the Rust standard library doesn't have APIs for I/O using custom uninitialized buffers (only for the built-in Vec, in a limited way). These are just musings on how to design APIs for custom buffers to make them the most useful, ergonomic, and interoperable (see the sketch below). It's a debate, because it could be done in several ways, with or without additions to the language.
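For concreteness, here's a rough sketch (mine, not from the article) of the portable safe status quo: `Read::read` takes `&mut [u8]`, which must already be initialized, so you end up zeroing the buffer up front and paying for initialization you don't need. APIs taking `&mut [MaybeUninit<u8>]` or a `BorrowedBuf`-style wrapper are what the musings are about.

```rust
use std::io::Read;

// Status-quo pattern: zero the buffer so it's valid for &mut [u8],
// even though the read will overwrite it anyway.
fn read_some(mut src: impl Read) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; 64 * 1024]; // redundant memset, just to satisfy the API
    let n = src.read(&mut buf)?;
    buf.truncate(n);
    Ok(buf)
}

fn main() -> std::io::Result<()> {
    let data = read_some(&[1u8, 2, 3][..])?; // &[u8] implements Read
    println!("read {} bytes", data.len());
    Ok(())
}
```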
> Uninitialized bytes are not just some garbage random values, they're a safety risk.
Only when read. Writing to "uninitialized" memory[1] and reading it back is provably secure[2], but doesn't work in safe Rust as it stands. The linked article is a proposal to address that via some extra complexity that I guess sounds worth it.
[1] e.g. using it as the target of a read() syscall
[2] Because it's obviously isomorphic to "initialization"
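A rough sketch of what footnote [1]'s pattern has to look like in today's Rust (unsafe, Unix-only, and using the `libc` crate; this is not the API the linked article proposes):

```rust
use std::fs::File;
use std::mem::MaybeUninit;
use std::os::fd::AsRawFd;

fn read_uninit<'a>(file: &File, buf: &'a mut [MaybeUninit<u8>]) -> std::io::Result<&'a [u8]> {
    // SAFETY: read(2) only ever writes into the buffer, never reads from it,
    // so handing it uninitialized memory is fine at the syscall boundary.
    let n = unsafe { libc::read(file.as_raw_fd(), buf.as_mut_ptr().cast(), buf.len()) };
    if n < 0 {
        return Err(std::io::Error::last_os_error());
    }
    // SAFETY: the kernel initialized exactly the first `n` bytes.
    Ok(unsafe { std::slice::from_raw_parts(buf.as_ptr().cast::<u8>(), n as usize) })
}

fn main() -> std::io::Result<()> {
    let file = File::open("/etc/hostname")?;
    let mut buf = [MaybeUninit::<u8>::uninit(); 4096];
    let data = read_uninit(&file, &mut buf)?;
    println!("read {} bytes", data.len());
    Ok(())
}
```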
Obviously, initialized memory isn't uninitialized memory any more.
There are fun edge cases here. Writing to memory through `&mut T` makes it initialized for T, but its padding bytes become de-initialized (that's because the write can be a memcpy that also copies the padding bytes from a source that never initialized them).
Note that if you have a `&mut T` then the memory must already be initialized for T, so writing through that reference doesn't initialize anything new (although, as you say, it can de-initialize the padding bytes, but that only matters if you use transmute or pointer casting to get access to those padding bytes somehow).
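A small illustration of that padding edge case (my own example, assuming the usual layout where `u32` is 4-byte aligned):

```rust
#[repr(C)]
#[derive(Clone, Copy)]
struct Padded {
    a: u8,  // offset 0
    // 3 bytes of padding here (assuming u32 has 4-byte alignment)
    b: u32, // offset 4
}

fn overwrite(dst: &mut Padded, src: Padded) {
    // A plain assignment may be compiled as a memcpy that also copies the
    // padding bytes of `src` - bytes that were never initialized. Afterwards
    // `dst.a` and `dst.b` are initialized, but `dst`'s padding bytes count
    // as uninitialized again.
    *dst = src;
}

fn main() {
    let mut x = Padded { a: 1, b: 2 };
    overwrite(&mut x, Padded { a: 3, b: 4 });
    println!("{} {}", x.a, x.b); // fine: reads only the initialized fields
    // What would NOT be fine: casting &x to a *const [u8; 8] and reading all
    // 8 bytes, because that would read the (uninitialized) padding.
}
```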
ADHD meds contain controlled substances, and there's an annual production quota for them set by the DEA. The quota is intentionally set very tightly, so it's easy to hit it when the demand increases even slightly above projections.
Most international pharmaceutical companies have some presence in the US, so the US quota has a world-wide effect.
Additionally, prescriptions are for very specific doses of specific variants of the meds. Because it's a controlled substance, pharmacies aren't allowed to use any substitutes (not even something common-sense like dispensing 2x30mg for a 60mg prescription). This makes shortages happen even before all of the quota runs out, because some commonly used doses run out sooner.
Why would they do anything about that? It’s their job to set and enforce quotas, not to ensure access. From their perspective, I’d imagine that tight quotas make them feel reassured that they’ve got a lid on diversion concerns.
It does sound like the quota-setting system was designed for an era where the “legitimate” growth wasn’t on the order of “10% a year for 15 years”:
You're right that the DEA's quota system prioritizes diversion control over access, and it's clearly stuck in a bygone era unfit for today's demand growth. But it's baffling that Big Pharma, with its lobbying muscle, hasn't pushed Congress to modernize this bottleneck. Surely they'd profit from looser quotas.
Instead of hoping for a Trump EO to nuke the DEA (literally or figuratively), why not redistribute Controlled Substances Act enforcement? Agencies like the FBI or HHS already handle overlapping domains. The DEA's rigid gatekeeping, especially on research and quotas, stifles innovation more than it curbs abuse.
Or if the court overturned Wickard v Filburn. The Federal power to regulate substances like this at all is based on a butterfly effect version of the commerce clause.
The US is adopting isolationist policies based on a nationalist ideology. The government is run by anti-intellectuals. The US economic policy is based on xitter rants, and flip-flops every week. The fickle vindictive ruler is personally attacking businesses that don't make him look good. It's clear that in the US the path to success is now loyalty. The president runs a memecoin.
It is not going to happen, this is just day-dreaming. Yes, I saw the news, but you can't compare a few tens of people wanting to leave the US for ideological reasons to millions of people that stay in the US because they can fare better and make more money or start new companies overnight because they have a great idea.
The US is not adopting isolationist policies. It's adopting more nationalistic policies, which is no different than how China has been running its economy (and politics in general) for decades. And specifically the four year Trump Administration is pursuing heavily nationalistic policies. There's no evidence the Democrats will keep much of Trump's policy direction, as certainly the Biden Admin and Trump Admin could hardly be more different.
Let me know where you see the US military pulling back from its global footprint. How many hundreds of global bases has the US begun closing? They're expanding US military spending as usual, not shrinking. The US isn't shuttering its military bases in Europe or Asia.
The US is currently trying to expedite an end to the Ukraine v Russia war, so it can pivot all of its resources to the last target standing in the Middle East: Iran. That's anything but isolationist.
Also, the US pursuing Greenland and the Panama Canal, is the opposite of isolationist. It's expansionist-nationalistic. It's China-like behavior (Taiwan, Hong Kong, South China Sea, Tibet).
I really like the WebGPU API. That's the API where the major players, including Apple and Microsoft, are forced to collaborate. It has real-world implementations on all major platforms.
With the wgpu and Google dawn implementations, the API isn't actually tied to the Web, and can be used in native applications.
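For example, a minimal native "hello adapter" with the wgpu crate looks roughly like this (my sketch; exact field names and signatures have shifted between wgpu releases, this follows the 0.19/0.20-era API and also uses the `pollster` crate to block on the async calls):

```rust
// Minimal native wgpu sketch: no browser and no JavaScript involved.
fn main() {
    let instance = wgpu::Instance::default();

    // Pick a physical GPU; the Vulkan/Metal/DX12 backend is chosen automatically.
    let adapter = pollster::block_on(
        instance.request_adapter(&wgpu::RequestAdapterOptions::default()),
    )
    .expect("no suitable GPU adapter found");

    // Create the logical device and its command queue.
    let (device, queue) = pollster::block_on(
        adapter.request_device(&wgpu::DeviceDescriptor::default(), None),
    )
    .expect("failed to create device");

    println!("running on: {}", adapter.get_info().name);
    let _ = (device, queue); // a real app would build pipelines and submit work here
}
```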
The only reason I like WebGL and WebGPU is that they are the only 3D APIs where major players take managed language runtimes into consideration, because they can't do otherwise.
Had WebAssembly already been there, without being forced to go through JavaScript for Web APIs, they would most likely be C APIs with everyone and their dog writing bindings instead.
Now, it is still pretty much a Chrome-only API, and only available on macOS, Android and Windows.
Safari and Firefox have it as a preview, and who knows when it will ever be stable at a scale that doesn't require "Works best on Chrome" banners.
Support on GNU/Linux, even from Chrome, is pretty much not there, at least for something to use in production.
And then we have the whole drama that, after 15 years, there are still no usable developer tools in browsers for 3D debugging: one is forced to guess which rendering calls come from the browser and which from the application, fall back to GPU printf debugging, or keep a native version that can be plugged into RenderDoc or similar.
People pick the best option, while a worse option can creep up from being awful to a close second, and then suddenly become the best option.
There's a critical point at which there's enough EV infrastructure to overcome objections, available cars become cheap enough, and then there's hardly any reason to pick gas cars that are slower, laggier, noisier, smelly, more expensive to run and can't be refuelled at home.
Sort of. While electric cars are great, the type of person who buys a $3,000 car cannot afford the cheapest electric car for about 10-15 years after that tipping point, even after you account for gas savings. So even though new cars are likely to switch suddenly, it will still be a decade before that catches up. The average car in the US is 12 years old.
Even the type of person who buys a 3-year-old car cannot (will not?) afford payments on a new car, even accounting for the gas savings. They will buy what they can get - but they will also influence the market, as they are likely to be sensible (often a new car is not sensible) and so willing to pay extra for the EV, and this in turn will put pressure on new cars, since trade-in value is very important to most people who buy a new car (which is sensible, but it is the banks forcing this on the buyers).
Maybe? I can see what you’re saying, but the real world can move as slow as sludge at times. These aren’t smartphones that are relatively easily produced, shipped, and purchased by users.
Second order effects like load on an aging power grid could easily cause speed bumps.
I hope you’re right, but I don’t know that I could bet on it.
HDR when it works properly is nice, but nearly all HDR LCD monitors are so bad, they're basically a scam.
The high-end LCD monitors (with full-array local dimming) barely make any difference, while you'll get a lot of downsides from bad HDR software implementations that struggle to get the correct brightness/gamma and saturation.
IMHO HDR is only worth viewing on OLED screens, and requires a dimly lit environment. Otherwise either the hardware is not capable enough, or the content is mastered for wrong brightness levels, and the software trying to fix that makes it look even worse.
Most "HDR" monitors are junk that can't display HDR. The HDR formats/signals are designed for brightness levels and viewing conditions that nobody uses.
The end result is complete chaos. Every piece of the pipeline does something wrong, and then the software tries to compensate by emitting doubly wrong data, without even having reliable information about what it needs to compensate for.
What we really need is a set of standards that everybody follows. The reason normal displays work so well is that everyone settled on sRGB, and as long as a display gets close to that, say 95% sRGB, everyone except maybe a few graphics designers will have an equivalent experience.
But HDR is a minefield of different display qualities, color spaces, and standards. It's no wonder that nobody gets it right and everyone feels confused.
HDR on a display that has a peak brightness of 2000 nits will look completely different than on a display with 800 nits, and they both get to claim they are HDR.
We should have a standard equivalent to color spaces. Set, say, 2000 nits as 100% of HDR. Then a 2000-nit display gets to claim it's 100% HDR. An 800-nit display gets to claim 40% HDR, etc. A 2500-nit display could even use 125% HDR in its marketing.
It's still not perfect - some displays (OLED) can only show peak brightness over a portion of the screen. But it would be an improvement.
The DisplayHDR standard is supposed to be that, but they've ruined its reputation by allowing HDR400 to exist when HDR1000 should have been the minimum.
Besides, HDR quality is more complex than just max nits, because it depends on viewing conditions and black levels (and everyone cheats with their contrast metrics).
OLEDs can peak at 600 nits and look awesome — in a pitch black room. LCD monitors could boost to 2000 nits and display white on grey.
We have sRGB kinda working for color primaries and gamma, but it's not the real sRGB at 80 nits. It ended up being relative instead of absolute.
A lot of the mess is caused by the need to adapt content mastered for a pitch-black cinema at 2000 nits to 800-1000 nits in daylight. That needs very careful processing to preserve highlights and saturation, but software can't rely on the display doing it properly, and doing it in software sends a false signal and risks the display correcting it twice.
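To make "careful processing" concrete, here's an illustrative sketch (my own, not anything from the comments above) of one common shape for that adaptation: pass mid-tones through unchanged and softly roll off highlights toward the display's peak instead of hard-clipping them. The knee position and curve are arbitrary choices, and a real pipeline also has to handle saturation, not just luminance.

```rust
// Map a mastered luminance value (in nits) to a dimmer display's range.
fn tone_map_nits(input_nits: f64, display_peak_nits: f64) -> f64 {
    let knee = display_peak_nits * 0.75; // below the knee, pass luminance through untouched
    if input_nits <= knee {
        return input_nits;
    }
    // Exponential roll-off: approaches display_peak_nits asymptotically, so
    // bright highlights are compressed but never hard-clipped.
    let headroom = display_peak_nits - knee;
    knee + headroom * (1.0 - (-(input_nits - knee) / headroom).exp())
}

fn main() {
    // Content mastered up to 2000 nits, shown on an 800-nit display.
    for mastered in [100.0, 500.0, 700.0, 1000.0, 2000.0] {
        println!("{mastered:6.0} nits mastered -> {:6.1} nits displayed",
                 tone_map_nits(mastered, 800.0));
    }
}
```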
CPUs evolved to execute C-like code quickly. They couldn't dramatically change the way C interfaces with the CPU, so they had to change the hidden internals instead.
For example, CPUs didn't have an option to hide DRAM latency with a SIMT architecture, so they went for complex opaque branch prediction and speculative execution instead.
The way C is built and deployed in practice didn't leave room for recompiling code for a specific CPU, so explicit scheduling like VLIW failed. Instead there's implicit magic that works with existing binaries.
When there were enough transistors to have more ALUs, more registers, more of everything in parallel, C couldn't target that. So CPUs got increasingly complex OoO execution, hidden register banks, and magic handling of the stack as registers.
Contrast this with current GPUs, which have register-like storage that is explicitly divided between threads (sort of like the 6502's zero page – something that C couldn't target well either!).