And there may be general resistance mechanisms that hit more than one chemical (like changes in membrane permeability and efflux pumps). Over time, with more exposure, the costs can be expected to decline as resistance is optimized.
Ultimately, resistance can evolve that kicks in only on exposure to the chemicals in question. Bacteria already do this with, say, the enzyme needed to metabolize lactose: the gene isn't expressed until lactose is present.
I think the AI bubble may have some interesting parallels with the dot com bubble ~25 years ago.
The internet was revolutionary and transformed the global economy. However, most of the internet companies at the time were garbage and were given money because people were blinded by the hype. At the end of the day, we were left with a handful of viable companies that went on to great things, and a lot of embarrassed investors.
I think that’s a great analogy (and I was doing software then).
We know machine learning is a big deal; it’s been a big deal for many years, so of course recent breakthroughs are going to be similarly important.
The short-term allocation of staggering amounts of money into one category of technology (instruct-tuned language-model chatbots) is clearly not the future of all technology, and the AGI thing is a weird religion at this point (or rather a radical splinter faction of a weird religion).
But there is huge value here and it’s only a matter of time until subsequent rounds of innovation realize that value in the form of systems that complete the recipe by adding customer-focused use cases to the technology.
Everyone knew the Internet was going to be big, but the Information Superhighway technology CEOs were talking about in the late 90s is just kind of funny now. We’re still glad they paid for all that fiber.
And a lot of the products that ended up mattering were founded in the decade after the dot com bubble: Facebook in 2004, YouTube in 2005, Twitter in 2006, Spotify in 2006, WhatsApp in 2009, etc.
A hype bubble is great for pumping money into experimentation and infrastructure, but the real fruits typically come later, once everything has had a chance to mature.
A similar thing happened with computer vision and CNNs. There was a lot of hype when "detect whether there's an eagle in this image" turned from a multi-year research project into something your intern could code up over a weekend. But most of the useful/profitable industry applications only came later, once the dust had settled and the technology had matured.
They were garbage in hindsight. Being "blinded by the hype" is what drives people to try new things and fail. And that's okay! It's okay that we ended up with a handful of viable companies. Those viable companies emerged because people tried new things and failed. Investors lost money because investment carries the risk of loss.
From a business perspective this is right. Unless OpenAI creates AGI they'll probably never make a dime. Great products do not lead inevitably to great profits.
I think the focus on AGI is misguided, at least in the short run. There's profit to be made in specialized intelligence, especially dull, boring stuff like understanding legal contracts or compliance auditing. These AI models have plenty of utility that can be profitably rented out, even if their understanding of the world is far short of general intelligence.
Even just replacing 10% of first-line customer service is a gigantic market opportunity.
Everyone tried the first time by adding stupid menus that you have to navigate with numbers; then they made those recognize spoken words instead of numbers; now everyone is scrambling to get them to be "intelligent" enough to take actual questions and answer the most frequently occurring ones in a manner that satisfies customers.
If I know what my data look like, I can choose an order of summation that reduces the error. I wouldn't want the compiler to assume associativity by default and introduce bugs. There's a reason this reordering is disabled by default.
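As a concrete illustration (a minimal Rust sketch, not tied to any particular compiler or library): floating-point addition isn't associative, so the same numbers summed in a different order can give a different result, and adding the small values before the large one keeps them from being rounded away.

    fn main() {
        // One large value and many tiny ones. In f64, 1e16 + 1.0 rounds
        // straight back to 1e16, so the order of summation decides whether
        // the tiny values contribute at all.
        let large = 1e16_f64;
        let tiny = vec![1.0_f64; 1000];

        // Large value first: every individual `+ 1.0` is rounded away.
        let mut large_first = large;
        for &t in &tiny {
            large_first += t;
        }

        // Tiny values first: they accumulate to 1000.0 before meeting the
        // large value, so they survive.
        let tiny_first = tiny.iter().sum::<f64>() + large;

        println!("large first: {large_first}"); // stays at 1e16
        println!("tiny first:  {tiny_first}");  // 1e16 + 1000
    }

Whether a compiler may rewrite one of those forms into the other is exactly what reassociation / fast-math style flags control, which is why they're opt-in.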
Not least that you should get the same result from your code every time, even if you compile it with a different compiler. It doesn't matter whether that result is somehow better, worse, or “the difference is so small that it doesn't matter”; it needs to be exactly the same, every time.
I think there’s often a disconnect in these discussions between people who work on libraries and people who work on application code.
If you have written a library, the user will provide some inputs. While the rounding behavior of each floating-point operation is well defined, for arbitrary user input you usually can’t guarantee which way the rounding will go. Therefore, if you want to be at all rigorous, you need to do the numerical analysis for user inputs drawn from some range. That will give you results with error bounds, not exact bit patterns.
If you want exact matches for your tests, maybe identify the bits that are essentially meaningless and write them to some arbitrary value.
Edit: that said I don’t think anybody particularly owns rigor on this front, given that most libraries don’t actually do the analysis, lol.
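For what it's worth, here is one way that last idea could look (an untested Rust sketch; the 8 "meaningless" bits are a made-up placeholder that would come out of your error analysis, and the trick only works when the values being compared share a sign and exponent):

    fn canonicalize(x: f64, meaningless_bits: u32) -> f64 {
        // Zero out the low `meaningless_bits` of the mantissa so that any two
        // results agreeing to within that error band become bit-identical.
        // Assumes 0 < meaningless_bits < 64.
        let mask = !((1u64 << meaningless_bits) - 1);
        f64::from_bits(x.to_bits() & mask)
    }

    fn main() {
        let a = 0.1_f64 + 0.2_f64; // 0.30000000000000004
        let b = 0.3_f64;           // differs from `a` by one ulp
        assert_ne!(a.to_bits(), b.to_bits());
        // With the bottom 8 bits declared meaningless, the test can go back
        // to exact, bitwise equality.
        assert_eq!(canonicalize(a, 8).to_bits(), canonicalize(b, 8).to_bits());
    }

A ulp-distance comparison is the more robust version of the same idea, but it gives up the "exact match" property that makes test failures easy to diff.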
Rust (which is what we're discussing here) actually doesn't promise this in general. But for the three operating systems you mentioned, that is in fact what it delivers, because, as another commenter mentioned, it's table stakes. If your OS can't do this, it's a toy OS.
The Windows and Linux solutions are by Mara Bos (the macOS one might be too; I don't know).
The Windows one is very elegant but opaque. Basically, Microsoft provides an appropriate API ("Slim Reader/Writer Locks"), and Mara's code just uses that API.
The Linux one shows exactly how to use a futex: if you know what a futex is, yeah, Rust just uses a futex. If you don't, go read about futexes; they're clever.
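For a rough flavor of the futex approach (a heavily simplified, untested, Linux-only sketch using the libc crate; the real implementation in std also spins before sleeping and is generally much more careful):

    use std::sync::atomic::{AtomicU32, Ordering};

    // State: 0 = unlocked, 1 = locked, 2 = locked and there may be waiters.
    pub struct RawMutex {
        state: AtomicU32,
    }

    impl RawMutex {
        pub const fn new() -> Self {
            Self { state: AtomicU32::new(0) }
        }

        pub fn lock(&self) {
            // Fast path: an uncontended lock is one atomic op, no syscall.
            if self.state.compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed).is_ok() {
                return;
            }
            // Slow path: mark the lock contended and sleep until woken.
            while self.state.swap(2, Ordering::Acquire) != 0 {
                futex_wait(&self.state, 2);
            }
        }

        pub fn unlock(&self) {
            // Only pay for a wake syscall if someone may be sleeping.
            if self.state.swap(0, Ordering::Release) == 2 {
                futex_wake_one(&self.state);
            }
        }
    }

    fn futex_wait(atom: &AtomicU32, expected: u32) {
        // The kernel puts the thread to sleep only if `*atom` still equals
        // `expected`, so the check and the sleep can't race (no lost wakeups).
        unsafe {
            libc::syscall(
                libc::SYS_futex,
                atom as *const AtomicU32,
                libc::FUTEX_WAIT | libc::FUTEX_PRIVATE_FLAG,
                expected,
                std::ptr::null::<libc::timespec>(), // no timeout
            );
        }
    }

    fn futex_wake_one(atom: &AtomicU32) {
        unsafe {
            libc::syscall(
                libc::SYS_futex,
                atom as *const AtomicU32,
                libc::FUTEX_WAKE | libc::FUTEX_PRIVATE_FLAG,
                1, // wake at most one waiter
            );
        }
    }

The clever part is entirely in the wait: because the kernel re-checks the value atomically before sleeping, the userspace atomic and the kernel's wait queue can't get out of sync.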