Hacker News | bloaf's comments

For an example of harder-than-NP-complete, I was shocked to find out how hard vector reachability is after being handed a vector reachability problem and assuming I could just look up a reasonable-time algorithm.

I incorrectly assumed it would have some basic linear-algebra solution because of how simple the problem seemed.
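For concreteness, here is a minimal brute-force sketch of reachability in a vector addition system (one standard formulation of vector reachability). The function name, the example vectors, and the depth bound are all my own illustration; the depth bound is there precisely because the general problem is wildly expensive (Ackermann-complete), so no small bound works in general.

```python
from collections import deque

def vas_reachable(start, target, moves, max_steps=10):
    """Depth-bounded BFS over a vector addition system.

    States are tuples of nonnegative integers; a move (an integer vector)
    may be applied only if every coordinate stays >= 0.
    """
    frontier = deque([(tuple(start), 0)])
    seen = {tuple(start)}
    while frontier:
        state, depth = frontier.popleft()
        if state == tuple(target):
            return True
        if depth == max_steps:
            continue
        for move in moves:
            nxt = tuple(s + m for s, m in zip(state, move))
            if all(c >= 0 for c in nxt) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return False

# Spend two units of the first counter to gain one of the second:
print(vas_reachable((4, 0), (0, 2), [(-2, 1)]))  # True
print(vas_reachable((1, 0), (0, 1), [(-2, 1)]))  # False
```

The deceptive part is exactly what the comment describes: the search space looks like it should yield to linear algebra, but the nonnegativity constraint breaks that intuition.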


There are some choices you can make that seem innocuous but dramatically increase the chances of lethal outcomes, like using corn or coconut.

https://www.youtube.com/watch?v=yXnSYfv6bCA https://en.wikipedia.org/wiki/Bongkrek_acid


Thank you. I would never have thought about using corn for such things, but the coconut surprised me.


This is not at all obvious. Freezing vegetables involves washing, cutting, and blanching, and they may even be subjected to ultrasound during freezing to accelerate the process.


That's why I gravitate toward the 'hyper-palatable' label over 'hyper-processed': to me it captures a more plausible set of criteria (engineered via added fat/sugar/salt to maximize appeal, etc.) that cause a more plausible and specific set of problems (hijacking reward pathways to cause overeating, etc.).


There is a Peter Attia interview with Michael Easter that explores this concept and anecdotally supports it: he visits the Tsimane, a tribe with low rates of obesity and cardiovascular disease, and describes his personal experience eating their plain, unseasoned diet compared to typical Western foods.

I normally dislike typical podcasts because of their self-promoting, low-information-density nature, but I thought this one was worth recommending.


If you want it from an industrial perspective, I suggest checking out “Salt Sugar Fat: How the Food Giants Hooked Us” by Michael Moss, and “The End of Overeating: Taking Control of the Insatiable American Appetite” by David A. Kessler, former head of the FDA.


This is a shortsighted view. There are plenty of counterpoints, from things like documenting slaughterhouse practices to conservatives implementing policies while simultaneously limiting research into the outcomes of those policies.

Obviously most of the 'AI researchers' right now are not altruistic, but it is possible to take the position that advancing AI will be sufficiently valuable to society that it overrides corporate preferences against bulk scraping.


Truly, the Tao was alive in that company.

https://www.mit.edu/~xela/tao.html


This guy’s videos are consistently great; they get a lot more technical than most other edutainment without getting bogged down.


So there is a pretty obvious analogy in chemistry: activation energy.

https://en.wikipedia.org/wiki/Activation_energy

The ELI5 version is that atoms are all trying to find a comfy place to be. Typically, they make some friends and hang out together, which makes them very comfy, and we call the group of friend-atoms a molecule. Sometimes there are groups of friendly atoms that would be even comfier if they swapped a few friends around, but losing friends and making new friends can be scary and seem like it won't be comfy, so it takes a bit of a push to convince the atoms to do it. That push is precisely activation energy, and the rearrangement won't happen without it (modulo quantum tunneling, but this is the ELI5 version).

In the software world, everyone is trying to make "good" software. Just like atoms in molecules, our ideas and systems form bonds with other ideas and systems where those bonds seem beneficial. But sometimes we realize there are better arrangements that weren't obvious at the outset, so we have to break apart the groupings that formed originally. That act of breakage and reforming takes energy, and is messy, and is exactly what this author is writing about.


I’ve been running some LLMs on my 5600X and 5700G CPUs, and the performance is… OK but not great. Token generation is about “reading out loud” pace for the 7B and 13B models. I also encounter occasional system crashes that I haven’t diagnosed yet, possibly due to high RAM utilization, but also possibly just power/thermal management issues.

A 50% speed boost would probably make the CPU option a lot more viable for a home chatbot, just due to how much easier it is to build a system with 128 GB of RAM vs 128 GB of VRAM.

I personally am going to experiment with the 48gb modules in the not too distant future.
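The RAM-vs-VRAM point can be made with a back-of-the-envelope estimate. This is a sketch, not a measurement: the 4-bit quantization and ~20% overhead figures are my assumptions, and real usage varies with runtime and context length.

```python
def approx_model_ram_gb(n_params_billion, bits_per_weight=4, overhead=1.2):
    """Rough rule of thumb: weights dominate model memory, so
    RAM ~= params * (bits per weight / 8), plus ~20% for the KV cache
    and runtime buffers. All figures are illustrative assumptions."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for size in (7, 13, 70):
    print(f"{size}B @ 4-bit: ~{approx_model_ram_gb(size):.1f} GB")
```

By this estimate a 70B model at 4-bit wants on the order of 40+ GB, which is trivial to hit with commodity DIMMs but expensive to hit with VRAM.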


You could put an 8700G in the same socket. The CPU isn't much faster, but it has the new NPU for AI. I'm thinking about this as an upgrade from my 2400G, but might want to wait for the new socket and DDR5.


I think you mixed up your sockets. 8700G is AM5.


I looked at upgrading my existing AMD based system's ram for this purpose, but found out my mobo/cpu only supports 128gb of ram. Lots, but not as much as I had hoped I could shove in there.


So I’m going to play the devil’s advocate and say that concise code has a readability advantage in that you don’t need to keep track of intermediate variables or other state across hundreds of lines of code or multiple files.

This was/is the promise of languages like APL: those willing to invest in learning arcane and terse symbols can move mountains in a few keystrokes.

I know that for my part, when reading a language I’m familiar with, it’s usually much faster to puzzle out a concise solution than a verbose-in-the-name-of-simplicity one.
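As a toy illustration of the trade-off (in Python rather than APL, and with made-up data), here is the same computation written both ways. The concise form demands you already know the idioms; the verbose form names every step but forces you to track intermediate state.

```python
data = [3, -1, 4, -1, 5, -9, 2, 6]

# Concise: sum of squares of the positive values, in one line.
total = sum(x * x for x in data if x > 0)

# Verbose: each stage gets its own variable to carry in your head.
positives = []
for value in data:
    if value > 0:
        positives.append(value)
squares = []
for value in positives:
    squares.append(value * value)
total_verbose = 0
for square in squares:
    total_verbose += square

print(total, total_verbose)  # 90 90
```

For a reader fluent in the idiom, the one-liner is arguably the easier of the two to verify at a glance.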

