Hacker News | barbarr's comments

The map is especially hilarious! You can infer the path of the eclipse from it. [0]

[0] https://trends.google.com/trends/explore?date=now%201-d&geo=...


Vermont appears to have been sunny.

Most of TX was as cloudy as London in winter.


The path of the eclipse, and then randomly Chico/Redding.


Error 429? Did google trends get hugged to death?


Agreed. And honestly, even if PETA got out of the shelter business, anti-PETA folks would just find some other marginal edge case to dismiss them entirely. [0]

I'm glad they are sticking to their principles and doing the right thing. Listening to popular opinion never works, since people will always hunt for reasons to criticize. If popular opinion had its way with the animal rights movement, there would be no animal rights movement.

[0] https://news.ycombinator.com/item?id=24386195


Similar experience here. My PhD research was in a niche (and unhyped) field where it was easy to pump out low-impact papers. My peers who went into more hyped fields had many more citations and much better job outcomes, whether in academia or in industry.

If you want to do high-impact research in the long run, you need to have a strong foundation, which means a solid bed of high-impact papers to point to when you need funding, opportunities, etc.


Nvidia is really embracing the hype train I see...

I'm assuming this model is in the same vein as ECMWF, GFS, ICON, CMC, and NAVGEM.

Imagine if NOAA or an academic lab released such a model and called it "Earth 2.0" or "Earth's digital twin". It would just feel... like hubris? Not sure why (or how) Nvidia gets away with such messaging.


Exactly. Different tech and different buzzwords over the decades, but the idea of building a global GIS or a spatial model of the world is not at all revolutionary. Not by half a century. "Digital twin" is just the latest buzzword, and it's falling short of expectations per usual. I was hoping the AI buzz would kill the digital twin. But now it looks like we're getting the "AI-enabled digital twin". Hype train's a-rollin'.


NASA has a project they call the Earth System Digital Twin (ESDT): https://esto.nasa.gov/earth-system-digital-twin/. The ESA also uses the same language at https://www.esa.int/ESA_Multimedia/Images/2020/09/Digital_Tw...


Oh, I had no clue. The Tony Stark-esque spinning earth graphics are hilarious. Guess the hype train catches everyone then.


The Copernicus project (ESA, ECMWF) also markets some products as "digital twins".


Hubris is allowed in marketing


The success of these avian flus is a direct result of mega factory farms ("concentrated animal feeding operations", or CAFOs) [0]. These massive operations are actually a relatively new thing, dating from the 1990s [1].

We need to stop concentrating large numbers of animals at single sites.

[0] https://www.liebertpub.com/doi/abs/10.1089/vbz.2006.6.338

[1] https://www.theguardian.com/environment/2023/jan/31/us-dairy...


The only way to change these practices is to make it more profitable to do as you suggest (which I agree needs to be done). What are the carrots and sticks that could be used in such a situation?


You don't "carrot and stick" the construction industry to make buildings that are safe and accessible, the airplane industry to build and maintain safe planes, the auto industry to have safe cars, and so on. Laws and regulations dictate certain levels of quality, and people follow them or go to prison. This has been incredibly effective and is the simple solution to this problem. It would be preferable for everyone to win, but it doesn't have to be more profitable - the dangerous behavior simply has to cease.

Bluntly, you don't carrot and stick criminals not to commit crimes. You make the behavior illegal and you punish criminals.


You just talked about sticks.

Carrots (in this case) would be subsidies and possible regulatory easements that might lessen relevant burdens if they are overly broad or excessively painful to follow (i.e., paperwork).


I imagine that you can draw pictures with this, where dark colors are represented by a lot of small bubbles. Not sure what the algorithm for that would look like though.


1. Click all bubbles down to a certain size

Now you have a regular grid of pixels. Take your favorite monochromatic pixel art of size 2^k x 2^k, and

2. Click on grid locations corresponding to black pixels
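The two steps above can be sketched in code. This is a hypothetical illustration: the image format (strings of '#' and '.') and the grid mapping are assumptions I'm making, not anything the site itself exposes.

```python
def clicks_for_image(pixels):
    """pixels: list of equal-length strings, '#' = black, '.' = white.

    Assumes step 1 has already been done: every bubble has been clicked
    down to the same size, so the page is a regular grid. Returns the
    (row, col) grid positions to click for step 2.
    """
    return [
        (r, c)
        for r, row in enumerate(pixels)
        for c, ch in enumerate(row)
        if ch == "#"
    ]

# A 4x4 "O" shape as example pixel art.
art = [
    ".##.",
    "#..#",
    "#..#",
    ".##.",
]
print(clicks_for_image(art))
```

Anything fancier (dithering darker regions into denser clusters of small bubbles, as the parent suggests) would start from the same coordinate list.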


I used to write my emails with ChatGPT's help for a while, but looking back it's quite cringe because they're so obviously ChatGPT-written, even when I thought they weren't. Now I've given up on using AI assistants for emails completely, because I really don't want the same thing to happen again in retrospect.


Still beats my former boss having GPT write all sorts of important comms, including a "farewell message" for me when I left. We could all tell, but I don't think he realized it.


I love these types of improvements - I figure they don't really have a practical application (except for shaving a few milliseconds off ultra large matrices), but they're a testament to human ingenuity and the desire to strive for perfection no matter how far out of reach.


Strassen's algorithm is rarely used: its primary use, to my understanding, is in matrix algorithms of more exotic fields than reals or complex numbers, where minimizing multiplies is extremely useful. Even then, Strassen only starts to beat out naive matrix multiply when you're looking at n >= 1000's--and at that size, you're probably starting to think about using sparse matrices where your implementation strategy is completely different. But for a regular dgemm or sgemm, it's not commonly used in BLAS implementations.

The more advanced algorithms than Strassen's are even worse in terms of the cutover point, and are never seriously considered.
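For reference, Strassen's scheme replaces the 8 recursive half-size multiplies of the naive block decomposition with 7. A minimal sketch, assuming square matrices whose size is a power of two (padding handles the general case); the cutoff of 64 is illustrative, not a tuned constant:

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """One Strassen recursion level per call; falls back to an
    ordinary multiply once the blocks are small enough."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven products (vs. eight for the naive block algorithm).
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The extra additions and subtractions are exactly what costs Strassen its numerical stability headroom and pushes its crossover point so high.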


The cross-over can be around 500 (https://doi.org/10.1109/SC.2016.58) for 2-level Strassen. It's not used by regular BLAS because it is less numerically stable (a concern that becomes more severe for the fancier fast MM algorithms). Whether or not the matrix can be compressed (as sparse, fast transforms, or data-sparse such as the various hierarchical low-rank representations) is more a statement about the problem domain, though it's true a sizable portion of applications that produce large matrices are producing matrices that are amenable to data-sparse representations.


How big are the matrices in some modern training pipelines? We always talk of absurdly large parameter spaces.


SGD keeps the matrices small, I think.


No, these aren't practical algorithms. No one uses them in reality.

The hope is that these small improvements will lead to new insights that will then lead to big improvements. I doubt the way matrix multiplication is done in practice will really change as a result of these theoretical results, unless something really groundbreaking and shocking is discovered.

It's all relatively petty in the grand scheme of things.


I think the last thing I saw on matrices was looking toward optimizing slightly larger sub-problems.

That smells a little bit like a loop unrolling and/or locality improvement to me. You can often beat a theoretical algorithmic improvement with a practical one in such situations. And if you do a little bit of both, you hopefully end up with something that's faster than the current most practical implementation.


There's a hell of a lot of matrix multiplications going on in AI/ML. Shaving off a few milliseconds for ultra large matrices could save power and money for the big players like Google, Microsoft/OpenAI and Meta.


Those are by and large many small matrices being multiplied; these results have nothing to do with that.


The bottleneck in those is not the arithmetic operation but the memory bandwidth once you have to spill your matrix out of SRAM.

As it stands right now, it is actually better to have a slower algorithm that uses the local memory more efficiently.
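The "memory-efficient over arithmetic-efficient" point is what cache-blocked (tiled) multiplication exploits: same O(n^3) arithmetic as the naive loop, but each tile is reused many times while it sits in fast memory. A minimal sketch, where the block size `bs` stands in for whatever actually fits a real cache or SRAM:

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Naive O(n^3) multiply over bs x bs tiles. Each A and B tile is
    loaded once per block-row/column and reused for a whole bs x bs
    output tile, cutting slow-memory traffic versus element-wise loops."""
    n = A.shape[0]
    C = np.zeros_like(A)
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C
```

Real BLAS kernels do this across multiple cache levels and registers, which is why a "slower" algorithm with better locality wins in practice.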


Hope this gains some traction in Congress. Shitflation is a thing that's been happening as well. [0]

[0] https://jonatron.github.io/shitflation/


As someone who didn't know what JPEG XL was before reading this, it sounded to me like some specialized file format for huge images. Ironically, this would lead me to the right choice of file type if I had a large image.

I do admit, though, that if I saw a folder full of .jpeg and .jpegxl files before reading this, I'd expect the .jpegxl ones to be larger on average. Or if someone said "should I send you the .jpeg or the .jpegxl" and I was low on disk space, I'd find it plausible to say "hmm, no reason to send the large version, the jpeg should be fine".


Small nitpick: The file extension for JPEG XL is .jxl.


The jxl-agen


> Ironically, this would lead me to the right choice of file type if I had a large image.

This is it, I think it works out just fine in the end.


You probably missed this though:

> Or if someone said "should I send you the .jpeg or the .jpegxl" and I was low on disk space, I'd find it plausible to say "hmm, no reason to send the large version, the jpeg should be fine".

