Agreed. And honestly, even if PETA got out of the shelter business, anti-PETA folks will just find some other marginal edge case to dismiss them entirely. [0]
I'm glad they are sticking to their principles and doing the right thing. Listening to popular opinion never works, since people will always hunt for reasons to criticize. If popular opinion had its way with the animal rights movement, there would be no animal rights movement.
Similar experience here. My PhD research was in a niche (and unhyped) field where it was easy to pump out low-impact papers. My peers who went into more hyped fields had many more citations and much better job outcomes, whether in academia or in industry.
If you want to do high-impact research in the long run, you need to have a strong foundation, which means a solid bed of high-impact papers to point to when you need funding, opportunities, etc.
Nvidia is really embracing the hype train, I see...
I'm assuming this model is in the same vein as ECMWF, GFS, ICON, CMC, and NAVGEM.
Imagine if NOAA or an academic lab released such a model and called it "Earth 2.0" or "Earth's digital twin". It would just feel... like hubris? Not sure why (or how) Nvidia gets away with such messaging.
Exactly. Different tech and different buzzwords over the decades, but the idea of building a global GIS or a spatial model of the world is not at all revolutionary. Not by half a century. "Digital Twin" is just the latest buzzword, and it's falling short of expectations, per usual. I was hoping the AI buzz would kill the digital twin. But now it looks like we're getting the "AI-enabled digital twin". Hype train's a-rollin'.
The success of these avian flus is a direct result of mega factory farms ("concentrated animal feeding operations", or CAFOs) [0]. These massive operations are a relatively new thing, dating from the 1990s [1].
We need to stop concentrating large numbers of animals at single sites.
The only way to change these practices is to make it more profitable to do as you suggest (which I agree needs to be done). What are the carrots and sticks that could be used in such a situation?
You don't "carrot and stick" the construction industry to make buildings that are safe and accessible, the airplane industry to build and maintain safe planes, the auto industry to have have safe cars, and so on. Laws and regulations dictate certain levels of quality, and people follow them or go to prison. This has been incredibly effective and is the simple solution to this problem. It would be preferable for everyone to win, but it doesn't have to be more profitable - the dangerous behavior simply has to cease.
Bluntly, you don't carrot and stick criminals not to commit crimes. You make the behavior illegal and you punish criminals.
Carrots (in this case) would be subsidies and possible regulatory easements that might lessen relevant burdens if they are overly broad or excessively painful to follow (e.g., paperwork).
I imagine that you can draw pictures with this, where dark colors are represented by a lot of small bubbles. Not sure what the algorithm for that would look like though.
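Maybe something like rejection sampling on darkness? Here's a toy sketch of that idea, assuming Pillow is available; the input filename, sample count, and radius formula are all made-up illustrative choices, not anything from the tool itself:

```python
# A minimal sketch: darker pixels accept more samples, and each accepted
# sample becomes a small bubble, so dark regions fill with many small circles.
import random
from PIL import Image, ImageDraw

img = Image.open("input.png").convert("L")   # grayscale source (hypothetical file)
w, h = img.size
canvas = Image.new("L", (w, h), 255)         # white output canvas
draw = ImageDraw.Draw(canvas)

N_SAMPLES = 50_000                           # illustrative guess
for _ in range(N_SAMPLES):
    x, y = random.randrange(w), random.randrange(h)
    darkness = 1.0 - img.getpixel((x, y)) / 255.0
    # Accept a sample with probability equal to its darkness, so bubbles
    # cluster densely in dark regions and stay sparse in light ones.
    if random.random() < darkness:
        r = 1 + 2 * (1 - darkness)           # darker -> smaller bubbles
        draw.ellipse((x - r, y - r, x + r, y + r), outline=0)

canvas.save("bubbles.png")
```

A real implementation would probably want proper circle packing or blue-noise stippling instead of independent random samples, but this gets the "lots of small bubbles where it's dark" effect.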
I wrote my emails with ChatGPT's help for a while, but looking back they're quite cringe: so obviously ChatGPT-written, even when I thought they weren't. Now I've given up on using AI assistants for email completely, because I really don't want the same thing to happen again in retrospect.
Still beats my former boss having GPT write all sorts of important comms, including a "farewell message" for me when I left. We could all tell, but I don't think he realized it.
I love these types of improvements - I figure they don't really have a practical application (except for shaving a few milliseconds off ultra large matrices), but they're a testament to human ingenuity and the desire to strive for perfection no matter how far out of reach.
Strassen's algorithm is rarely used: its primary use, to my understanding, is in matrix algorithms over more exotic fields than the reals or complex numbers, where minimizing multiplies is extremely useful. Even then, Strassen only starts to beat out naive matrix multiply when you're looking at n in the thousands--and at that size, you're probably starting to think about using sparse matrices, where your implementation strategy is completely different. But for a regular dgemm or sgemm, it's not commonly used in BLAS implementations.
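For anyone curious what that cutover actually looks like, here's a toy sketch (assuming square matrices with power-of-two sizes; CUTOFF is an illustrative knob, not a tuned constant, and real BLAS kernels don't work this way):

```python
# Strassen with a cutover to plain (BLAS-backed) matmul below CUTOFF.
import numpy as np

CUTOFF = 128  # below this, the naive product wins in practice

def strassen(A, B):
    n = A.shape[0]
    if n <= CUTOFF:
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # Seven recursive products instead of eight -- the whole point of Strassen.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

# Quick sanity check -- agreement is close but not bit-exact, which hints
# at the numerical-stability concern with fast MM algorithms.
A = np.random.rand(512, 512)
B = np.random.rand(512, 512)
assert np.allclose(strassen(A, B), A @ B)
```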
The more advanced algorithms than Strassen's are even worse in terms of the cutover point, and are never seriously considered.
The cross-over can be around 500 (https://doi.org/10.1109/SC.2016.58) for 2-level Strassen. It's not used by regular BLAS because it is less numerically stable (a concern that becomes more severe for the fancier fast MM algorithms). Whether or not the matrix can be compressed (as sparse, fast transforms, or data-sparse such as the various hierarchical low-rank representations) is more a statement about the problem domain, though it's true a sizable portion of applications that produce large matrices are producing matrices that are amenable to data-sparse representations.
No, these aren't practical algorithms. No one uses them in reality.
The hope is that these small improvements will lead to new insights that will then lead to big improvements. I doubt that the way matrix multiplication is actually done will change because of these theoretical results, unless something really groundbreaking and shocking is discovered.
It's all relatively minor in the grand scheme of things.
I think the last thing I saw on matrix multiplication was work on optimizing slightly larger sub-problems.
That smells a little bit like loop unrolling and/or a locality improvement to me. You can often beat a theoretical algorithmic improvement with a practical one in such situations. And if you do a little bit of both, you hopefully end up with something that's faster than the current most practical implementation.
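The locality idea is basically cache blocking (tiling). A rough sketch, assuming square matrices and a BLOCK size that divides n evenly; BLOCK is a made-up value you'd tune to cache sizes, and in Python the tile products just delegate to NumPy, so this only illustrates the loop structure:

```python
# Blocked matrix multiply: each (i, j, k) step touches three BLOCK x BLOCK
# tiles that fit in cache together, which is where the practical win comes
# from in a real C/Fortran kernel.
import numpy as np

BLOCK = 64  # illustrative; tune to L1/L2 cache in a real kernel

def blocked_matmul(A, B):
    n = A.shape[0]
    C = np.zeros_like(A)
    for i in range(0, n, BLOCK):
        for j in range(0, n, BLOCK):
            for k in range(0, n, BLOCK):
                C[i:i+BLOCK, j:j+BLOCK] += (
                    A[i:i+BLOCK, k:k+BLOCK] @ B[k:k+BLOCK, j:j+BLOCK]
                )
    return C
```

Same asymptotic cost as the naive triple loop, but the memory traffic drops dramatically, which is exactly the kind of practical improvement that routinely beats a better exponent.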
There's a hell of a lot of matrix multiplications going on in AI/ML. Shaving off a few milliseconds for ultra large matrices could save power and money for the big players like Google, Microsoft/OpenAI and Meta.
As someone who didn't know what JPEG XL was before reading this, it sounded to me like some specialized file format for huge images. Ironically, this would lead me to the right choice of file type if I had a large image.
I do admit, though, that if I saw a folder full of .jpeg and .jpegxl files before reading this, I'd expect the .jpegxl ones to be larger on average. Or if someone said "should I send you the .jpeg or the .jpegxl" and I was low on disk space, I'd find it plausible to say "hmm, no reason to send the large version, the jpeg should be fine".
> Or if someone said "should I send you the .jpeg or the .jpegxl" and I was low on disk space, I'd find it plausible to say "hmm, no reason to send the large version, the jpeg should be fine".
[0] https://trends.google.com/trends/explore?date=now%201-d&geo=...